
Release 12.4

Release Date: May 7th, 2026



Status labels: New Feature · Improvement · Bug Fix · Enterprise Only

Published Models

[New Feature] Published new models
  • Published Nemotron Nano V3 Omni (30B total / 3B active — MoE). It’s NVIDIA's multimodal LLM that unifies video, audio, image, and text understanding in a single model, with integrated reasoning support.
  • Published Qwen3.6-35B-A3B, an efficient MoE LLM with 35B total but only 3B active parameters, delivering near-large-model performance at much lower compute cost.

Cached Prompt Tokens

[New Feature] Introduced cached prompt tokens in model responses
  • Model responses now include cached prompt token counts, providing accurate token usage reporting when prompt caching is active.
  • Learn more here.
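As an illustration of how the new counts might be consumed, here is a minimal sketch. The field names (`prompt_tokens_details.cached_tokens`) assume an OpenAI-compatible usage payload and should be checked against your actual responses:

```python
# Hypothetical helper for the cached-prompt-token counts described above.
# Assumption: the usage payload follows the OpenAI-compatible shape
# "prompt_tokens_details": {"cached_tokens": N}; verify against a real response.

def summarize_prompt_cache(usage: dict) -> dict:
    """Split prompt tokens into cached vs. freshly processed tokens."""
    prompt = usage.get("prompt_tokens", 0)
    details = usage.get("prompt_tokens_details") or {}
    cached = details.get("cached_tokens", 0)
    return {
        "prompt_tokens": prompt,
        "cached_tokens": cached,
        "uncached_tokens": prompt - cached,
    }

# Example payload as it might appear when prompt caching is active:
usage = {
    "prompt_tokens": 1200,
    "completion_tokens": 80,
    "prompt_tokens_details": {"cached_tokens": 1024},
}
print(summarize_prompt_cache(usage))
# {'prompt_tokens': 1200, 'cached_tokens': 1024, 'uncached_tokens': 176}
```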

Pipelines

[New Feature] Introduced code-first Pipeline DSL with CLI support
  • Introduced a code-first Pipeline DSL that lets you define pipelines programmatically in Python and generate configs via CLI.
  • Use clarifai pipeline init to scaffold the pipeline config and clarifai pipeline upload to deploy it to the platform.
  • Learn more here.
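A minimal command-line sketch of the flow described above, run from your project directory. Exact prompts and arguments may differ, so check clarifai pipeline --help:

```shell
# Scaffold the pipeline config (interactive prompts guide the setup).
clarifai pipeline init

# After editing the generated config / Python DSL definition, deploy it.
clarifai pipeline upload
```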
[New Feature] clarifai pipeline run --dev for local pipeline development
  • New --dev flag on clarifai pipeline run enables a local development loop, letting you iterate on pipeline steps without deploying to the cloud.
  • See PR #1012 for details.
[New Feature] clarifai pipeline local-run for Docker-based step testing
  • New clarifai pipeline local-run command runs individual pipeline steps locally inside Docker containers, matching the production runtime environment before you deploy.
[New Feature] Auto-create compute resources via --instance flag
  • clarifai pipeline run now accepts an --instance flag that automatically creates the required compute cluster and nodepool if they don't already exist.
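Taken together, the three additions above support a local-first development loop along these lines. Whether --instance takes a value, and in what form, is an assumption; check clarifai pipeline run --help:

```shell
# Iterate on pipeline steps locally, without deploying to the cloud.
clarifai pipeline run --dev

# Test an individual step inside a Docker container that matches the
# production runtime.
clarifai pipeline local-run

# Run on the platform; a missing compute cluster/nodepool is created
# automatically (argument form below is an assumption).
clarifai pipeline run --instance <instance-type>
```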
[Improvement] Improved clarifai pipeline init UX and help text
  • Clearer help text, improved prompts, and a post-init next-steps message now guide users through the full pipeline setup flow after running clarifai pipeline init.

Model Deployment

[Improvement] Disabled deploy_latest_version for clarifai model serve
  • clarifai model serve no longer automatically promotes a newly uploaded version as the live deployed version, giving you explicit control over which version is active.

Local Runners

[Improvement] Local runner defaults to PRIVATE; new --public flag makes all associated resources public
  • Models and resources created via the local runner are now private by default.
  • Pass the --public flag to make all associated resources public in a single command.
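As a sketch, assuming the local runner is started via the CLI's clarifai model local-runner subcommand (verify the exact subcommand and flags with clarifai model --help):

```shell
# Default behavior: models and resources created stay PRIVATE.
clarifai model local-runner

# Opt in to public visibility for all associated resources at once.
clarifai model local-runner --public
```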

Bug Fixes (Python SDK)

[Bug Fix] Fixed deployment worker pinning on clarifai model serve
  • Re-pins the deployment's desired_worker to the current model version when running clarifai model serve, preventing stale version references after serving.
[Bug Fix] Fixed Hugging Face private repo access validation
  • Corrected access validation for private Hugging Face repos that return a not_found response to anonymous requests, eliminating false access errors on valid private repos.
[Bug Fix] Loosened pinned requirements and fixed Clarifai package detection
  • Relaxed overly strict version pins in the SDK's dependencies and fixed a bug in detecting the installed Clarifai package version.
[Bug Fix] Fixed User.app() returning empty values
  • User.app() now correctly returns actual server-side app data instead of empty placeholder values.