Custom Prompter Model

Integrate a prompter model into an LLM workflow


A prompter model is a type of language model specifically designed to craft instructions that guide the output of large language models (LLMs). It supports prompt engineering by shaping the prompts an LLM receives so that its responses are optimized for your task.

Let's demonstrate how you can create your own prompter model and connect it to an LLM in a workflow.

Info: The initialization code used in the following examples is outlined in detail on the client installation page.

Create a Prompter Model

##########################################################################################
# In this section, we set the user authentication, app ID, model ID, and model type ID.
# Change these strings to run your own example.
#########################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to create your own model
MODEL_ID = 'my-prompter-model'
MODEL_TYPE_ID = 'prompter'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_models_response = stub.PostModels(
    service_pb2.PostModelsRequest(
        user_app_id=userDataObject,
        models=[
            resources_pb2.Model(
                id=MODEL_ID,
                model_type_id=MODEL_TYPE_ID
            )
        ]
    ),
    metadata=metadata
)

if post_models_response.status.code != status_code_pb2.SUCCESS:
    print(post_models_response.status)
    raise Exception("Post models failed, status: " + post_models_response.status.description)

Train a Prompter Model

When training a prompter model, you need to provide a prompt template, which serves as a pre-configured piece of text for instructing an LLM.

Note that your prompt template should include at least one instance of the placeholder {data.text.raw}. When you input your text data at inference time, all occurrences of {data.text.raw} within the template will be replaced with the provided text.
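
To make the substitution concrete, here is a plain-Python sketch of what conceptually happens at inference time (the actual replacement is performed server-side by the platform, not by your client code):

# Illustrative only: the platform performs this substitution server-side.
template = "Classify whether the sentiment of the given text is positive or negative {data.text.raw}"
input_text = "I love your product very much"
prompt = template.replace("{data.text.raw}", input_text)
print(prompt)
# Classify whether the sentiment of the given text is positive or negative I love your product very much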

###############################################################################################
# In this section, we set the user authentication, app ID, model ID, and prompter details.
# Change these strings to run your own example.
###############################################################################################

USER_ID = "YOUR_USER_ID_HERE"
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = "YOUR_PAT_HERE"
APP_ID = "YOUR_APP_ID_HERE"
# Change these to train your own model
MODEL_ID = "my-prompter-model"
PROMPTER_DESCRIPTION = "Positive or negative sentiment classifier prompter"
PROMPT_TEMPLATE = "Classify whether the sentiment of the given text is positive or negative {data.text.raw}"

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2
from google.protobuf.struct_pb2 import Struct

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

params = Struct()
params.update({
    "prompt_template": PROMPT_TEMPLATE
})

metadata = (("authorization", "Key " + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

post_model_versions = stub.PostModelVersions(
    service_pb2.PostModelVersionsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID,
        description=PROMPTER_DESCRIPTION,
        model_versions=[
            resources_pb2.ModelVersion(
                output_info=resources_pb2.OutputInfo(params=params)
            )
        ],
    ),
    metadata=metadata,
)

if post_model_versions.status.code != status_code_pb2.SUCCESS:
    print(post_model_versions.status)
    raise Exception("Post model versions failed, status: " + post_model_versions.status.description)

print(post_model_versions)
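
Training runs asynchronously, so the new version may not be ready immediately. As a hedged sketch (not part of the original walkthrough), you can list the model's versions and inspect their training status:

# Illustrative sketch: list the model's versions and print their training status.
list_versions_response = stub.ListModelVersions(
    service_pb2.ListModelVersionsRequest(
        user_app_id=userDataObject,
        model_id=MODEL_ID
    ),
    metadata=metadata
)

for version in list_versions_response.model_versions:
    print(version.id, version.status.description)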

Add to a Workflow

After training your prompter model, you can now put it to work by integrating it into an LLM workflow and using it to accomplish various tasks.

Below is an example of how to connect a prompter model to an LLM like GPT-4 for text-to-text tasks.

########################################################################################
# In this section, we set the user authentication, app ID, and the details of the new
# custom workflow. Change these strings to run your own example.
########################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to create your own custom workflow
WORKFLOW_ID = 'my-custom-prompter-workflow'

NODE_ID_1 = 'prompter-model'
PROMPTER_MODEL_ID = 'my-prompter-model'
PROMPTER_MODEL_USER_ID = 'YOUR_USER_ID_HERE'
PROMPTER_MODEL_APP_ID = 'my-custom-app'
PROMPTER_MODEL_VERSION_ID = 'e851fb99a3b14df788ce11accee45c19'

NODE_ID_2 = 'text-to-text'
LLM_MODEL_ID = 'GPT-4'
LLM_MODEL_USER_ID = 'openai'
LLM_MODEL_APP_ID = 'chat-completion'
LLM_MODEL_VERSION = '5d7a50b44aec4a01a9c492c5a5fcf387'

##########################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
##########################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (('authorization', 'Key ' + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID) # The userDataObject is required when using a PAT

post_workflows_response = stub.PostWorkflows(
    service_pb2.PostWorkflowsRequest(
        user_app_id=userDataObject,
        workflows=[
            resources_pb2.Workflow(
                id=WORKFLOW_ID,
                nodes=[
                    resources_pb2.WorkflowNode(
                        id=NODE_ID_1,
                        model=resources_pb2.Model(
                            id=PROMPTER_MODEL_ID,
                            user_id=PROMPTER_MODEL_USER_ID,
                            app_id=PROMPTER_MODEL_APP_ID,
                            model_version=resources_pb2.ModelVersion(
                                id=PROMPTER_MODEL_VERSION_ID
                            )
                        )
                    ),
                    resources_pb2.WorkflowNode(
                        id=NODE_ID_2,
                        model=resources_pb2.Model(
                            id=LLM_MODEL_ID,
                            user_id=LLM_MODEL_USER_ID,
                            app_id=LLM_MODEL_APP_ID,
                            model_version=resources_pb2.ModelVersion(
                                id=LLM_MODEL_VERSION
                            )
                        ),
                        node_inputs=[
                            resources_pb2.NodeInput(node_id=NODE_ID_1)
                        ]
                    ),
                ]
            )
        ]
    ),
    metadata=metadata
)

if post_workflows_response.status.code != status_code_pb2.SUCCESS:
    print(post_workflows_response.status)
    raise Exception("Post workflows failed, status: " + post_workflows_response.status.description)

print(post_workflows_response)
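
To verify the workflow and inspect its nodes, you can fetch it back by ID. This optional sketch reuses the objects defined above:

# Optional sketch: fetch the workflow we just created and list its nodes.
get_workflow_response = stub.GetWorkflow(
    service_pb2.GetWorkflowRequest(
        user_app_id=userDataObject,
        workflow_id=WORKFLOW_ID
    ),
    metadata=metadata
)

for node in get_workflow_response.workflow.nodes:
    print(node.id, node.model.id)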

Workflow Predict

After creating the workflow, let's now use it to perform a text sentiment prediction task.

######################################################################################################
# In this section, we set the user authentication, app ID, workflow ID, and the text
# we want as an input. Change these strings to run your own example.
######################################################################################################

USER_ID = 'YOUR_USER_ID_HERE'
# Your PAT (Personal Access Token) can be found in the Account's Security section
PAT = 'YOUR_PAT_HERE'
APP_ID = 'YOUR_APP_ID_HERE'
# Change these to make your own predictions
WORKFLOW_ID = "my-custom-prompter-workflow"
RAW_TEXT = "I love your product very much"
# To use a hosted text file, assign the URL variable
# TEXT_FILE_URL = "https://samples.clarifai.com/negative_sentence_12.txt"
# Or, to use a local text file, assign the location variable
# TEXT_FILE_LOCATION = "YOUR_TEXT_FILE_LOCATION_HERE"

############################################################################
# YOU DO NOT NEED TO CHANGE ANYTHING BELOW THIS LINE TO RUN THIS EXAMPLE
############################################################################

from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

channel = ClarifaiChannel.get_grpc_channel()
stub = service_pb2_grpc.V2Stub(channel)

metadata = (("authorization", "Key " + PAT),)

userDataObject = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

# To use a local text file, uncomment the following lines
# with open(TEXT_FILE_LOCATION, "rb") as f:
#     file_bytes = f.read()

post_workflow_results_response = stub.PostWorkflowResults(
    service_pb2.PostWorkflowResultsRequest(
        user_app_id=userDataObject,
        workflow_id=WORKFLOW_ID,
        inputs=[
            resources_pb2.Input(
                data=resources_pb2.Data(
                    text=resources_pb2.Text(
                        raw=RAW_TEXT
                        # url=TEXT_FILE_URL
                        # raw=file_bytes
                    )
                )
            )
        ],
    ),
    metadata=metadata,
)

if post_workflow_results_response.status.code != status_code_pb2.SUCCESS:
    print(post_workflow_results_response.status)
    raise Exception("Post workflow results failed, status: " + post_workflow_results_response.status.description)

# We'll get one WorkflowResult for each input we provided above. Since we
# provided a single input, there is exactly one WorkflowResult here.
results = post_workflow_results_response.results[0]

# Each model in the workflow produces one output.
for output in results.outputs:
    model = output.model

    print("Predicted output for the model: `%s`" % model.id)
    print(output.data.text.raw)

# Uncomment this line to print the raw output
# print(results)

Text Output Example
Predicted output for the model: `my-prompter-model`
Classify whether the sentiment of the given text is positive or negative I love your product very much

Predicted output for the model: `GPT-4`
The sentiment of the given text is positive.

As you can see in the output above, the response contains the prediction of each model in the workflow. The prompter model's output starts with the template text provided earlier, with the {data.text.raw} placeholder substituted by the input text; that combined text is then passed to the GPT-4 model as its prompt. GPT-4, in turn, correctly predicts the sentiment of the provided input text.
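
If you only need the final answer rather than every intermediate output, you can read just the last output in the result. A minimal sketch, assuming the LLM is the last node in the workflow:

# Minimal sketch: the last output belongs to the final node (here, GPT-4).
final_answer = results.outputs[-1].data.text.raw
print(final_answer)  # e.g. "The sentiment of the given text is positive."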