Deploy Open-Source MCP Servers

Upload, deploy, and interact with open-source MCP servers on Clarifai


Besides building a custom MCP (Model Context Protocol) server for the Clarifai platform, you can also upload any open-source MCP server and expose it as a managed API endpoint — just like any model in the platform.

You can easily integrate third-party MCP servers with Clarifai by simply adding the mcp_server configuration to your config.yaml file.

This allows you to:

  • Expose MCP servers as HTTP APIs accessible through Clarifai
  • Use the FastMCP client to interact with deployed MCP servers
  • Seamlessly integrate MCP tools with LLMs to extend model capabilities

Step 1: Perform Prerequisites

Get an MCP Server

You can get open-source MCP servers from third-party repositories, such as mcpservers.org or mcp.so.

objective

For this example, let's use the DuckDuckGo MCP server to demonstrate how to upload and deploy an open-source MCP server on Clarifai. This server provides tools for web search, browsing, and information retrieval, and requires no authentication tokens or secrets — making it easy to deploy and use. You can also follow its tutorial here.

Get an Agentic Model

Integrating large language models (LLMs) with MCP servers enables agentic capabilities, allowing models to discover and use external tools autonomously to complete tasks. MCP servers expose functionalities that models can invoke as function-calling tools during conversations.

With MCP server integration, an agentic model can iteratively discover tools, execute them, and reason over the results to produce more capable and context-aware responses.

Note: For a model to support agentic behavior through MCP servers on the Clarifai platform, it must extend the standard OpenAIModelClass with the AgenticModelClass. This enables:

  • Tool discovery and execution handled by the agentic model class
  • Iterative tool calling within a single predict or generate request
  • Compatibility with the OpenAI-compatible API and Clarifai SDKs
  • Support for both streaming and non-streaming modes

You can see an example implementation of AgenticModelClass in this 1/model.py file.

tip

To upload a model with agentic capabilities, simply use the AgenticModelClass — all other functionalities and steps remain the same as uploading a standard model on Clarifai. You can follow this example.
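
As a rough illustration only — the import path for AgenticModelClass below is an assumption, so check the linked 1/model.py example for the actual one — a model.py with agentic capabilities can be as simple as swapping the base class:

# Sketch only: the module path for AgenticModelClass is assumed here;
# refer to the linked 1/model.py example for the real import.
from clarifai.runners.models.agentic_class import AgenticModelClass

class MyAgenticModel(AgenticModelClass):
    # Define load_model and your prediction methods exactly as you would for a
    # standard OpenAI-compatible model; inheriting from AgenticModelClass is
    # what adds tool discovery and iterative tool calling.
    ...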

Several example models with agentic capabilities enabled are available on the Clarifai platform for reference.

Install Packages

Install the following Python packages to work with the DuckDuckGo Browser MCP server:

  • clarifai — The latest version of the Clarifai Python SDK, required to integrate your MCP server with the Clarifai platform. This package also comes with the Clarifai Command Line Interface (CLI), which you’ll use to upload the server.
  • fastmcp — The core framework for interacting with MCP servers.
  • openai — Used to run inferences through Clarifai’s OpenAI-compatible endpoint with the OpenAI client library.
  • anyio — An asynchronous I/O library used by FastMCP.
  • requests — A lightweight library for making HTTP requests.
  • mcp — The Model Context Protocol library.

You can run the following command to install them:

pip install --upgrade clarifai fastmcp openai anyio requests mcp

Get Credentials

You need to have the following Clarifai credentials:

  • App ID — Create a Clarifai application and get its ID. This is where your MCP server will reside on the Clarifai platform.
  • User ID — In the collapsible left sidebar, select Settings and choose Account from the dropdown list. Then, locate your user ID.
  • Personal Access Token (PAT) — From the same Settings option, choose Secrets to generate or copy your PAT. This token is used to authenticate your connection with the Clarifai platform.

Then, set the CLARIFAI_PAT as an environment variable.

export CLARIFAI_PAT=YOUR_PERSONAL_ACCESS_TOKEN_HERE

Create Files

On the Clarifai platform, MCP servers are treated just like models and follow the same underlying architecture.

To upload an MCP server, you need to create a project directory and organize your files according to Clarifai’s custom model requirements, as shown below:

your_model_directory/
├── 1/
│   └── model.py
├── requirements.txt
├── config.yaml
├── Dockerfile
└── client.py
  • your_model_directory/ — The root directory containing all files related to your MCP server.
    • 1/ — A required subdirectory that contains the model implementation (note that the folder name is 1).
      • model.py — Implements the core logic of the MCP server.
    • requirements.txt — Specifies the Python dependencies required to run the server.
    • config.yaml — Defines metadata and configuration settings used when uploading the MCP server to Clarifai.
    • Dockerfile — Defines the runtime environment used to build and run your MCP server on the Clarifai platform.
    • client.py — An example client script you can use to interact with the MCP server after it has been uploaded.

Create a Cluster and Nodepool

You'll need to deploy your MCP server to a dedicated compute cluster and nodepool. This action provisions the necessary resources to run your server and handle requests efficiently.

Learn how to create a cluster and nodepool here.

Note: Ensure that your cluster and nodepool meet the compute resource requirements specified in your config.yaml file.

Step 2: Prepare model.py File

When building a custom MCP server from scratch, model.py is where you implement the server’s core logic, including defining and exposing tools.

However, when uploading an open-source MCP server, you do not need to reimplement this logic. Instead, you only need to define a class that inherits from StdioMCPModelClass, which is designed to run and manage stdio-based MCP servers, such as the Browser MCP server.

Here is the model.py file that defines a StdioMCPModelClass subclass for the Browser MCP server:

from clarifai.runners.models.stdio_mcp_class import StdioMCPModelClass

class BrowserMCPServerClass(StdioMCPModelClass):
    pass

The StdioMCPModelClass abstracts away the complexity of managing stdio-based MCP servers. By inheriting from it and configuring the mcp_server section in config.yaml, your server is ready to be uploaded.

Specifically, StdioMCPModelClass automatically:

  • Starts the MCP server as a stdio process
  • Discovers all available MCP tools
  • Exposes the tools through an HTTP API
  • Handles tool execution and response formatting

This makes it easy to deploy open-source MCP servers on Clarifai with minimal code.

Step 3: Prepare the config.yaml File

The config.yaml file defines the build, deployment, and runtime configuration for a custom model — or, in this case, an MCP server — on the Clarifai platform. It tells Clarifai how to build the execution environment and where the server should live within your account.

When integrating an open-source MCP server, this file is also where you specify the server’s mcp_server configuration, which you can obtain from the repository where it's hosted.

Here is the config.yaml file for the Browser MCP server:

model:
  id: browser-mcp-server
  app_id: app_id
  user_id: user_id
  model_type_id: mcp

build_info:
  python_version: "3.11"

inference_compute_info:
  cpu_limit: 1000m
  cpu_memory: 1Gi
  num_accelerators: 0

mcp_server:
  command: "uvx"
  args: ["duckduckgo-mcp-server"]

Let’s break down what each part of the configuration does.

  • model — Defines where the MCP server will be uploaded on the Clarifai platform:
    • id — Provide a unique identifier for the model (server)
    • app_id — Provide your Clarifai app where the server will reside
    • user_id — Provide your Clarifai user ID
    • model_type_id — Specifies the model type; use mcp for MCP servers
  • build_info — Specifies the Python version used to build the runtime environment. Note that Clarifai currently supports Python 3.11 and 3.12 (default).
  • inference_compute_info — Defines the compute resources allocated when the MCP server is running:
    • cpu_limit: 1000m — Allocates 1 CPU core
    • cpu_memory: 1Gi — Allocates 1 GB of RAM
    • num_accelerators: 0 — No hardware accelerators (e.g., GPUs), which is typical for MCP servers

    Note: Ensure your Clarifai compute cluster and nodepool meet these requirements before deploying the model.

  • mcp_server — Specifies how the MCP server process is started:
    • command — The executable used to launch the server (e.g., npx, uvx, python)
    • args — Arguments passed to the command
Examples of other MCP servers

To deploy a different MCP server, simply update the mcp_server section in config.yaml.

Note: Learn how to deploy it on Clarifai here.

  • GitHub MCP Server

mcp_server:
  command: "npx"
  args: ["-y", "@modelcontextprotocol/server-github"]

  • Custom Python MCP Server (a custom, user-implemented MCP server)

mcp_server:
  command: "python"
  args: ["-m", "my_mcp_server"]

Step 4: Define Dependencies in requirements.txt

The requirements.txt file specifies all Python packages required by your MCP server. During deployment, Clarifai uses this file to automatically install the necessary dependencies, ensuring the server runs correctly in its runtime environment.

Here is the requirements.txt file for the custom model — or, in this case, the MCP server — we want to upload:

clarifai==12.1.7
anyio
mcp==1.26.0
fastmcp==2.14.5
requests>=2.31.0

Step 5: Define Dockerfile

The Dockerfile defines the container environment used to build and run the MCP server on the Clarifai platform.

When you upload an MCP server, Clarifai builds a Docker image from this file and uses it to execute your server in a secure, isolated container — exactly the same way custom models are deployed.

The Dockerfile is responsible for:

  • Selecting the base Python image
  • Installing system-level dependencies (if any)
  • Installing Python dependencies from requirements.txt
  • Copying your MCP server code into the container
  • Defining the entry point that starts the MCP server

Here is the Dockerfile for the Browser MCP server:

# syntax=docker/dockerfile:1.13-labs

FROM --platform=$TARGETPLATFORM python:3.12-slim

COPY --link requirements.txt /home/nonroot/requirements.txt

# Update clarifai package so we always have latest protocol to the API. Everything should land in /venv
RUN ["pip", "install", "--no-cache-dir", "-r", "/home/nonroot/requirements.txt"]
RUN ["pip", "show", "--no-cache-dir", "clarifai"]

# Set the NUMBA cache dir to /tmp
# Set the TORCHINDUCTOR cache dir to /tmp
# The CLARIFAI* variables will be set by the templating system.
ENV NUMBA_CACHE_DIR=/tmp/numba_cache \
    TORCHINDUCTOR_CACHE_DIR=/tmp/torchinductor_cache \
    HOME=/tmp \
    DEBIAN_FRONTEND=noninteractive

#####
# Download checkpoints if config.yaml has checkpoints.when = "build"
COPY --link=true config.yaml /home/nonroot/main/
# RUN ["python", "-m", "clarifai.cli", "model", "download-checkpoints", "/home/nonroot/main", "--out_path", "/home/nonroot/main/1/checkpoints", "--stage", "build"]

#####
# Copy in the actual files like config.yaml, requirements.txt, and most importantly 1/model.py
# for the actual model.
# If checkpoints aren't downloaded since a checkpoints: block is not provided, then they will
# be in the build context and copied here as well.
COPY --link=true 1 /home/nonroot/main/1

# At this point we only need these for validation in the SDK.
COPY --link=true requirements.txt config.yaml /home/nonroot/main/

# Add the model directory to the python path.
ENV PYTHONPATH=${PYTHONPATH}:/home/nonroot/main \
    CLARIFAI_PAT=${CLARIFAI_PAT} \
    CLARIFAI_USER_ID=${CLARIFAI_USER_ID} \
    CLARIFAI_RUNNER_ID=${CLARIFAI_RUNNER_ID} \
    CLARIFAI_NODEPOOL_ID=${CLARIFAI_NODEPOOL_ID} \
    CLARIFAI_COMPUTE_CLUSTER_ID=${CLARIFAI_COMPUTE_CLUSTER_ID} \
    CLARIFAI_API_BASE=${CLARIFAI_API_BASE:-https://api.clarifai.com}

WORKDIR /home/nonroot/main

# Finally run the clarifai entrypoint to start the runner loop and local runner server.
# Note(zeiler): we may want to make this a clarifai CLI call.
ENTRYPOINT ["python", "-m", "clarifai.runners.server"]
CMD ["--model_path", "/home/nonroot/main"]
#############################

Step 6: Upload to Clarifai

To upload your open-source MCP server to the Clarifai platform, navigate to the server’s root directory and run:

clarifai model upload . --skip_dockerfile

The --skip_dockerfile flag prevents the CLI from generating a default Dockerfile and instructs it to use the Dockerfile provided in your project directory.

This command will:

  • Stream Docker build logs directly to your terminal for real-time monitoring and troubleshooting
  • Build the Docker image defined in your Dockerfile
  • Upload the MCP server to your Clarifai account
  • Make the server available for inference via the Clarifai HTTP API
Build Logs Example
clarifai model upload . --skip_dockerfile
[INFO] 10:41:18.121633 No checkpoints specified in the config file | thread=8416895168
[INFO] 10:41:18.403308 New model will be created at https://clarifai.com/alfrick/art-app/models/browser-mcp-server with it's first version. | thread=8416895168
Press Enter to continue...
[INFO] 10:41:22.772047 Added num_threads=16 to model version | thread=8416895168
[INFO] 10:41:24.068670 Uploading file... | thread=6146568192
[INFO] 10:41:24.069353 Upload complete! | thread=6146568192
[INFO] 10:41:24.666769 Created Model Version ID: 68db0b04a30c46f190270eee2674cd41 | thread=8416895168 est_id: sdk-python-12.2.0-16336010dcfb4c18aa69a78881dad684d684
[INFO] 10:41:24.667530 Full url to that version is: https://clarifai.com/alfrick/art-app/models/browser-mcp-server | thread=8416895168
[INFO] 10:41:30.012472 2026-02-18 07:41:25.632421 INFO: Downloading uploaded buildable from storage...
2026-02-18 07:41:26.417148 INFO: Done downloading buildable from storage
2026-02-18 07:41:26.421001 INFO: Extracting upload...
2026-02-18 07:41:26.425735 INFO: Done extracting upload
2026-02-18 07:41:26.428708 INFO: Parsing requirements file for buildable version ID ****0eee2674cd41
2026-02-18 07:41:26.454545 INFO: Dockerfile found at /shared/context/Dockerfile
cat: /shared/context/downloader/hf_token: No such file or directory
2026-02-18 07:41:27.138444 INFO: Setting up credentials
amazon-ecr-credential-helper
Version: 0.8.0
Git commit: ********
2026-02-18 07:41:27.143151 INFO: Building image...
#1 \[internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 \[internal] load build definition from Dockerfile
#1 transferring dockerfile: 2.27kB done
#1 DONE 0.0s

#2 resolve image config for docker-image://docker.io/docker/dockerfile:1.13-labs
#2 DONE 0.2s

#3 docker-image://docker.io/docker/dockerfile:1.13-labs@sha256:************18b8
#3 resolve docker.io/docker/dockerfile:1.13-labs@sha256:************18b8 done
#3 CACHED

#4 \[internal] load metadata for docker.io/library/python:3.12-slim
#4 DONE 0.1s

#5 \[internal] load .dockerignore
#5 transferring context: 2B done
#5 DONE 0.0s

#6 \[internal] load build context
#6 transferring context: 854B done
#6 DONE 0.0s

#7 [1/7] FROM docker.io/library/python:3.12-slim@sha256:************9fab
#7 resolve docker.io/library/python:3.12-slim@sha256:************9fab done
#7 DONE 0.0s

#8 [2/7] COPY --link requirements.txt /home/nonroot/requirements.txt
#8 CACHED

#9 [3/7] RUN ["pip", "install", "--no-cache-dir", "-r", "/home/nonroot/requirements.txt"]
#9 CACHED

#10 [4/7] RUN ["pip", "show", "--no-cache-dir", "clarifai"]
#10 CACHED

#11 [5/7] COPY --link=true 1 /home/nonroot/main/1
#11 DONE 0.0s

#12 [6/7] COPY --link=true requirements.txt config.yaml /home/nonroot/main/
#12 merging
#12 merging 1.1s done
#12 DONE 1.1s

#13 [7/7] WORKDIR /home/nonroot/main
#13 DONE 0.0s

#14 \[auth] sharing credentials for 891377382885.dkr.ecr.us-east-1.amazonaws.com
#14 DONE 0.0s

#15 exporting to image
#15 exporting layers done
#15 exporting manifest sha256:************9dc8 done
#15 exporting config sha256:************8455 done
#15 pushing layers
#15 pushing layers 0.7s done
#15 pushing manifest for ****/prod/python:****0eee2674cd41@sha256:************9dc8
#15 pushing manifest for ****/prod/python:****0eee2674cd41@sha256:************9dc8 0.3s done
#15 DONE 1.0s
2026-02-18 07:41:29.611282 INFO: Done building image!!! | thread=8416895168
[INFO] 10:41:33.409202 Model build complete! | thread=8416895168
[INFO] 10:41:33.409785 Build time elapsed 8.7s) | thread=8416895168
[INFO] 10:41:33.409953 Check out the model at https://clarifai.com/alfrick/art-app/models/browser-mcp-server version: 68db0b04a30c46f190270eee2674cd41 | thread=8416895168
[INFO] 10:41:33.420923

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
# Here is a code snippet to use this model:
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
| thread=8416895168
[INFO] 10:41:33.421076
import asyncio
import os

from fastmcp import Client
from fastmcp.client.transports import StreamableHttpTransport

transport = StreamableHttpTransport(
    url="https://api.clarifai.com/v2/ext/mcp/v1/users/alfrick/apps/art-app/models/browser-mcp-server",
    headers={"Authorization": "Bearer " + os.environ["CLARIFAI_PAT"]},
)

async def main():
    async with Client(transport) as client:
        tools = await client.list_tools()
        print(f"Available tools: {tools}")
        # TODO: update the dictionary of arguments passed to call_tool to make sense for your MCP.
        result = await client.call_tool(tools[0].name, {"a": 5, "b": 3})
        print(f"Result: {result[0].text}")

if __name__ == "__main__":
    asyncio.run(main())
| thread=8416895168
[INFO] 10:41:33.421142

XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
| thread=8416895168

🔶 Do you want to deploy the model? [Y/n]: y

🚀 Model Deployment

🖥️ Available Compute Clusters:
1. advanced-cluster-hpst – No description
Select compute cluster (number): 1

📦 Available Nodepools:
1. advanced-nodepool-n1v2 – No description
Select nodepool (number): 1

⌨️ Enter Deployment Configuration:
Enter deployment ID [deploy-browser-mcp-server-7e6f7a]: deploy1
Enter minimum replicas [1]: 1
Enter maximum replicas [5]: 5

⏳ Deploying model...
[INFO] 10:42:09.387973 Deployment with ID 'deploy1' is created:
code: SUCCESS
description: "Ok"
req_id: "sdk-python-12.2.0-09e3513b7e7a4ab9a8bf1e25b7adbf7f"
| thread=8416895168
✅ Deployment 'deploy1' successfully created for model 'browser-mcp-server' with version '68db0b04a30c46f190270eee2674cd41'.
Model deployed successfully! You can test it now.
note

Once the upload completes, the build logs will include an example code snippet that you can copy into your client.py script. This snippet contains the URL of your deployed MCP server, which your AI agents or client applications will use to communicate with the server.

The MCP server URL is constructed using the following format: https://api.clarifai.com/v2/ext/mcp/v1/users/{user-id}/apps/{app-id}/models/{model-id}.
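
For example, a small helper like the one below (hypothetical, shown only to illustrate the format) assembles that endpoint URL from your identifiers:

import os

# Hypothetical helper for illustration: builds the MCP endpoint URL from your
# Clarifai identifiers, following the format described above.
def mcp_server_url(user_id: str, app_id: str, model_id: str) -> str:
    return (
        "https://api.clarifai.com/v2/ext/mcp/v1"
        f"/users/{user_id}/apps/{app_id}/models/{model_id}"
    )

# Replace these with your own IDs
url = mcp_server_url("your-user-id", "your-app-id", "browser-mcp-server")
headers = {"Authorization": "Bearer " + os.environ["CLARIFAI_PAT"]}
print(url)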

Step 7: Deploy the Model

After uploading the MCP server, you must deploy it to a dedicated compute cluster and nodepool. Deployment provisions the compute resources required to run the server and handle incoming requests.

Learn how to perform deployments here.

Note: You can also deploy the server by following the interactive prompts shown when uploading it to Clarifai from the terminal.

Once deployed, the server is automatically referenced during inference. You do not need to specify the deployment explicitly in your client code.

Step 8: Interact with the Server

Once the open-source MCP server is deployed, you can create a client script to communicate with it and invoke the exposed MCP tools. This allows agentic models to discover and use the server’s capabilities programmatically.

Use with FastMCP Client

Here is an example of interacting with the MCP server directly using the FastMCP client without going through an LLM. This is typically the first step before invoking tools, executing actions, or wiring the MCP server into an agentic LLM workflow.

Note: Remember to replace the placeholder values with the corresponding identifiers for your MCP server. You may also use any compatible agentic LLM.

import asyncio
import os
from fastmcp import Client
from fastmcp.client.transports import StreamableHttpTransport

transport = StreamableHttpTransport(
    # Replace placeholders with actual values
    url="https://api.clarifai.com/v2/ext/mcp/v1/users/{user_id}/apps/{app_id}/models/{model_id}",
    headers={
        "Authorization": f"Bearer {os.environ['CLARIFAI_PAT']}",
    },
)

async def main():
    async with Client(transport) as client:
        tools = await client.list_tools()
        print("Available tools:")
        for tool in tools:
            print(f"- {tool.name}")

        result = await client.call_tool(
            "search",
            {
                "query": "Clarifai MCP FastMCP example",
                "max_results": 5,
            },
        )

        print("\nSearch result:")
        print(result.content[0].text)


if __name__ == "__main__":
    asyncio.run(main())
Example Output
Available tools:
- search
- fetch_content

Search result:
Found 5 search results:

1. MCP | Clarifai Docs
URL: https://docs.clarifai.com/compute/agents/mcp/
Summary: MCPBuild performantMCPservers withFastMCPforClarifaiThe Model Context Protocol (MCP) is an open standard developed by Anthropic that acts as a universal language for AI models, particularly large language models (LLMs), to interact with external data sources (like GitHub, Slack, or databases) and extend their capabilities.

2. Building and Deploying a Custom MCP Server: A Practical Guide
URL: https://digitalthoughtdisruption.com/2025/08/14/build-custom-mcp-server-fastmcp-clarifai/
Summary: Learn how to build and deploy a customMCPserver withFastMCPandClarifai. This step-by-step guide covers configuration, tool design, testing, and scaling for seamless AI integration.

3. How to Build Your First MCP Server using FastMCP
URL: https://www.freecodecamp.org/news/how-to-build-your-first-mcp-server-using-fastmcp/
Summary: Deploying YourMCPServer You can deployFastMCPservers anywhere. For testing, thefastmcprun command is enough. For production, you can deploy toFastMCPCloud, which provides instant HTTPS endpoints and built-in authentication. If you prefer to self-host, use the HTTP or SSE transport to serve yourMCPendpoints from your own infrastructure.

4. FastMCP Builder - Claude Code Skill - GitHub
URL: https://github.com/husniadil/fastmcp-builder
Summary: A comprehensive Claude Code skill for building production-readyMCP(Model Context Protocol) servers using theFastMCPPython framework. This skill provides complete reference implementations, workingexamples, and proven patterns for creating robustMCPservers with tools, resources, prompts, OAuth authentication, and comprehensive testing.

5. Building Your Own MCP Server with FastMCP - Medium
URL: https://medium.com/infinitgraph/building-your-own-mcp-server-with-fastmcp-7d9cc73a1062
Summary: Building Your OwnMCPServer withFastMCPIn our last article, we unpacked the Model Context Protocol; we highlighted how it acts as a USB-C port for AI applications, and how it provides a …

The above snippet demonstrates how to:

  • Authenticate to an MCP server hosted on Clarifai
  • Establish an asynchronous, stream-capable MCP connection
  • Discover the tools and their capabilities as exposed by that MCP server using the client.list_tools() method. This is a key MCP concept: models and agents can dynamically discover what tools are available at runtime, rather than hard-coding them.

Note: The DuckDuckGo MCP server provides tools for web search and information retrieval. For example, the search tool queries the web using DuckDuckGo, while fetch_content retrieves and parses the content of a given web page.
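
For instance, still inside the same client session as above, you could retrieve a page with the fetch_content tool listed in the example output. The url argument name is an assumption here, so confirm it against the tool's inputSchema returned by list_tools():

# Runs inside the same `async with Client(transport) as client:` block as above.
# The "url" argument name is assumed — verify it via the tool's inputSchema.
page = await client.call_tool(
    "fetch_content",
    {"url": "https://docs.clarifai.com/compute/agents/mcp/"},
)
print(page.content[0].text[:500])  # first 500 characters of the fetched page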

note

If you encounter a Server error '503 Service Unavailable' while calling the server, it typically indicates that the model is in a cold state and still warming up. You may wait a moment before trying the request again.
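
One simple way to handle this in a script is to retry with a short pause between attempts. The sketch below assumes the client setup from the earlier example and a fixed retry budget; adjust both to your needs:

import asyncio

async def call_tool_with_retry(client, name, args, attempts=5, delay=10):
    # Retry a tool call a few times, pausing between attempts, to ride out
    # cold-start 503 errors while the server warms up.
    last_error = None
    for _ in range(attempts):
        try:
            return await client.call_tool(name, args)
        except Exception as error:  # e.g., "503 Service Unavailable" during warm-up
            last_error = error
            await asyncio.sleep(delay)
    raise last_error

# Usage, inside `async with Client(transport) as client:`:
# result = await call_tool_with_retry(client, "search", {"query": "Clarifai MCP", "max_results": 5})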

Integrate with LLMs

Here is an example of how to bridge an MCP server with an agentic LLM on Clarifai so the model can discover tools, decide when to use them, and call them during inference — all through Clarifai’s OpenAI-compatible endpoint.

import asyncio
import os
import json
from openai import AsyncOpenAI
from fastmcp import Client
from fastmcp.client.transports import StreamableHttpTransport

# MCP client setup
transport = StreamableHttpTransport(
    # Replace placeholders with actual values
    url="https://api.clarifai.com/v2/ext/mcp/v1/users/{user_id}/apps/{app_id}/models/{model_id}",
    headers={
        "Authorization": f"Bearer {os.environ['CLARIFAI_PAT']}",
    },
)

# OpenAI-compatible client (Clarifai)
openai_client = AsyncOpenAI(
    api_key=os.environ["CLARIFAI_PAT"],
    base_url="https://api.clarifai.com/v2/ext/openai/v1",
)

def format_tools_to_openai_function(tools):
    # Convert MCP tools to OpenAI function format
    return [
        {
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description or "",
                "parameters": tool.inputSchema,
            },
        }
        for tool in tools
    ]

async def main():
    # Discover MCP tools
    async with Client(transport) as mcp_client:
        tools_raw = await mcp_client.list_tools()
        tools = format_tools_to_openai_function(tools_raw)

        # First LLM call
        response = await openai_client.chat.completions.create(
            model="https://clarifai.com/qwen/qwenLM/models/Qwen3-30B-A3B-Instruct-2507",
            messages=[
                {
                    "role": "user",
                    "content": "What are the latest developments in artificial intelligence?",
                }
            ],
            tools=tools,
            tool_choice="auto",
        )

        message = response.choices[0].message

        # If the model calls a tool
        if message.tool_calls:
            tool_call = message.tool_calls[0]
            tool_name = tool_call.function.name
            tool_args = json.loads(tool_call.function.arguments)

            # Execute tool via MCP
            tool_result = await mcp_client.call_tool(
                tool_name,
                tool_args,
            )

            tool_output = tool_result.content[0].text

            # Second LLM call with tool result
            final_response = await openai_client.chat.completions.create(
                model="https://clarifai.com/qwen/qwenLM/models/Qwen3-30B-A3B-Instruct-2507",
                messages=[
                    {
                        "role": "user",
                        "content": "What are the latest developments in artificial intelligence?",
                    },
                    message,
                    {
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": tool_output,
                    },
                ],
            )

            print("\nFinal answer:\n")
            print(final_response.choices[0].message.content)

        else:
            # No tool call; print model response directly
            print("\nAnswer:\n")
            print(message.content)


if __name__ == "__main__":
    asyncio.run(main())

The above snippet demonstrates how to:

  • Connect to a Clarifai-hosted MCP server and discover its available tools
  • Convert MCP tools into OpenAI function-calling–compatible definitions
  • Send a chat completion request to an agentic LLM using Clarifai’s OpenAI-compatible API
  • Allow the model to autonomously decide whether to invoke a tool (tool_choice="auto")
  • Execute any requested tool calls via MCP and return the results to the model for final response generation
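
The snippet above performs a single tool round-trip. If you want the model to chain several tool calls, a common pattern is to loop until it answers without requesting a tool. The sketch below assumes the openai_client, the json import, the tools list, and the mcp_client from the example above are in scope, and caps the number of rounds:

async def run_agent_loop(mcp_client, tools, question, max_rounds=5):
    # Repeatedly call the LLM, execute any requested tools via MCP, and feed
    # the results back until the model replies without a tool call.
    messages = [{"role": "user", "content": question}]
    for _ in range(max_rounds):
        response = await openai_client.chat.completions.create(
            model="https://clarifai.com/qwen/qwenLM/models/Qwen3-30B-A3B-Instruct-2507",
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content
        messages.append(message)
        for tool_call in message.tool_calls:
            result = await mcp_client.call_tool(
                tool_call.function.name,
                json.loads(tool_call.function.arguments),
            )
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result.content[0].text,
            })
    return message.content  # fall back to the last reply if the round limit is hit
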
tip

Click here to learn more about integrating MCP servers with LLMs and running inferences on the Clarifai platform.