
Upcoming API Changes

Learn about scheduled API changes

Here is a list of API changes that we want you to be aware of well in advance, as they may affect how you use Clarifai's platform. These changes include scheduled downtime as well as improvements to the stability, performance, and functionality of the Clarifai platform, made in order to better serve you as a customer.

Some of these changes may not be backward compatible and thus require you to update how you call our APIs. We created this page with the mindset of being as transparent as possible so you can plan any corresponding changes in advance and minimize any interruptions to your usage of Clarifai.

The dates listed in the following tables are the dates we plan to make each change. We may actually make a change in the days following the specified date. To be safe, update your client-side code before that date to minimize any downtime to your applications.

We will continue to update this page regularly, so a good way to always stay up to date is to watch our documentation repo on GitHub.

Upcoming Changes


Completed Changes

Changes to the Use of PATs and API Keys

March 30th 2023 (Breaking change)

Critical changes to the use of PATs and API keys
An upcoming version of Clarifai’s API, 9.3, will significantly change how personal access tokens (PATs) and API keys work. We plan to implement this change on March 30, 2023, providing 45 days to change the way your applications authenticate on our platform.

Terminology: If any of the terms used here are unfamiliar, you can check them in our glossary. Specifically, we mention models, workflows, public, private, collaboration, organization, and community.

Why are we making this change? With PATs, you can access resources for which you're a collaborator or teammate, public content shared by any user, and all of your private content across all of your apps. This is simpler than using API keys, which are restricted to a single application. A PAT provides a consistent, secure, and robust authentication method. Finally, for Enterprise clients, organization functionality is PAT-only, so this change creates a consistent method of authentication across the platform.

What is changing? Previously, you could use API keys to access any model, concept, or workflow owned by the app scoped to the API key, as well as those owned by the user clarifai in the application main. Now, accessing models or workflows owned by clarifai in the application main can only be done with a PAT tied to your account. To be specific:

  • You must now use PATs to make API calls for resources that are outside the scope of your apps, such as Clarifai’s models and workflows. While using a PAT, you must also specify the USER_ID of the application owner, and the APP_ID of the application that you’re accessing. The legacy behavior allowed you to use the USER_ID and APP_ID of any application on the platform to access Clarifai models and workflows in the app "main". This change requires you to specify the USER_ID (clarifai) and APP_ID (main) associated with the application containing the resource (model, concept, workflow, etc).
  • You will no longer be able to use API keys to access resources outside the application the API key is created in. With a key, there is no need to specify the user_id or the app_id as they are already part of the key. API keys will function as normal when accessing resources within the application the key is created in, but will no longer allow access to resources owned by the user clarifai in the application main.
  • Since workflows are a collection of models, some of which may be references to models that are not in the same application as the workflow itself, you should also use PATs to interact with workflows. While API keys will, for the time being, still work for workflows that are in the same app as the API key and contain only models from that app, this is a very narrow use of workflows. Therefore, we recommend updating your code to use PATs when using workflows too.
  • The preferred method for accessing the Clarifai API moving forward is a PAT, and using one avoids potential future breakage. Of course, we will provide prior notice if additional API key behavior is going to change.
We hope and expect that this will not be a significant change for you. In order to implement it, you will need to ensure that you set the PAT, USER_ID, and APP_ID variables appropriately. There are examples using all of our supported languages on this page, and we are available at any time if you need assistance or have any questions. The best place to contact us for support questions is our Community Slack, which is monitored by many of our support teams and is the fastest way to get help.
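As an illustration, here is a minimal Python sketch of a PostModelOutputs request authenticated with a PAT against the clarifai/main application. The model ID and image URL are example values, and the URL follows Clarifai's users/{user_id}/apps/{app_id} REST pattern; check the API reference for the exact route before relying on it.

```python
import json
import urllib.request

PAT = "YOUR_PAT_HERE"                   # substitute your personal access token
USER_ID = "clarifai"                    # owner of the public "main" application
APP_ID = "main"
MODEL_ID = "general-image-recognition"  # example public model ID

# The user_id and app_id of the application containing the resource are now
# part of the request path.
url = f"https://api.clarifai.com/v2/users/{USER_ID}/apps/{APP_ID}/models/{MODEL_ID}/outputs"

payload = {
    "inputs": [
        {"data": {"image": {"url": "https://samples.clarifai.com/metro-north.jpg"}}}
    ]
}

# PATs use the same "Authorization: Key <token>" header scheme as API keys.
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Key {PAT}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send the call; it is omitted here.
```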

We do apologize for any inconvenience this causes, however, we are confident that this is a positive change that will simplify the usage of the platform going forward and make it easier to leverage AI created by other people on our platform!

Thank you for your understanding and please feel free to reach out for any help.

Deprecation of closed_environment

January 26th 2023

Deprecation of closed_environment in Favor of enrich_dataset For Creating Embedding-Classifier Models

When using the PostModels endpoint to create a custom embedding-classifier model, you could include the closed_environment variable as part of the modelVersion.OutputInfo.OutputConfig struct.

The variable accepted a Boolean value and specified whether a pre-stored dataset, of (usually) negative embeddings, should be added to the training process of your model. This generally leads to higher model accuracy without any additional effort on your end.

  • If closed_environment was set to False (the default), we would try to use additional negative embeddings during the training process. However, this default would fail if the underlying base model did not have negative embeddings.
  • If it was set to True, the user wanted a closed environment for training, so we did not add additional negative embeddings. This worked for all embedding models.
We plan to replace it with enrich_dataset, which is specified inside modelVersion.TrainInfo.Params when creating embedding-classifiers, the only type of model that supports it.

The enrich_dataset variable will be implemented as an ENUM instead of a BOOL so that it can have two options: Automatic (default) and Disabled.

  • Automatic means that if there are negative embeddings for a base model, we will use them—and we won’t use them if they’re not available. So, the training will not fail if the underlying embeddings do not have negative embeddings.
  • Disabled means that we should not use the negative embeddings whether they are available or not.
That way, enrich_dataset fixes the problem with closed_environment: previously, leaving closed_environment at its default value of False would fail if the base model didn't have negative embeddings.

This change will also affect the PostModelVersions endpoint.
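The two request shapes can be sketched side by side. The field names below follow the description above (modelVersion.OutputInfo.OutputConfig and modelVersion.TrainInfo.Params rendered in snake_case as they typically appear in JSON bodies); this is an assumption, so verify against the current API reference.

```python
# Old style (deprecated): a Boolean flag inside output_info.output_config.
old_body = {
    "model_versions": [{
        "output_info": {"output_config": {"closed_environment": False}}
    }]
}

# New style: an enum inside train_info.params with two options,
# "Automatic" (default) or "Disabled".
new_body = {
    "model_versions": [{
        "train_info": {"params": {"enrich_dataset": "Automatic"}}
    }]
}
```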

Updates to Model and Model Version Endpoints

January 20, 2023

Critical Updates to Model and Model Version Endpoints

Old Behavior

  • Previously, using the PostModels endpoint to create a new model also created a placeholder version of the model with user-provided fields. If the model's model_type_id was trainable, a new ModelVersion was created with UNTRAINED status by default; if it was not trainable, the new ModelVersion was created with TRAINED status.
  • Modifying a model's config settings required using the PatchModels endpoint. That is how you previously changed the info fields, descriptions, notes, and metadata for both models and model versions. If you were only patching informational fields about the model, and not the model version, no model version was created. If you were patching a trainable model whose latest version was trained, and you were only changing the output_info, a new trained model version was created with the new info. If the latest model version had not been trained, the newly created model version was marked untrained by default. If you were patching an untrainable model type, the newly created model version was marked trained.
  • Previously, using the PostModelVersions endpoint automatically, by default, kicked off training the latest untrained model version—even though a user may not intend to train the latest version, which could unnecessarily incur training costs.
  • Previously, using the PatchModelVersions endpoint only patched a model version's visibility, metadata, license, or description—while maintaining the model version's status.

New Behavior

  • PostModels will create new models but not create new model versions. This means trainable models that have not yet been trained will require the additional step of calling PostModelVersions—while providing the *_info fields in the model version—to effect training.
  • PostModelVersions will allow users to provide information specific to a model version. All the *_info fields—such as output_info, input_info, train_info, and import_info—will be migrated to this endpoint. This minimizes the confusion and difficulty of maintaining these endpoints. Users will be able to patch model-specific fields without worrying about model version fields being affected.
  • PatchModels will allow users to patch only the model level fields, nothing in the model version. Unnecessary model versions will no longer be created. This allows users to easily track persisted versions.
  • PatchModelVersions will be the new way to change most of the model version fields like gettable, metadata, license, description, notes, and output_info (not including concepts).
  • If users used model.output_info.output_config when inferencing, they will have to change that to model.model_version.output_info.output_config.
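The moved output_config location can be illustrated with a minimal stand-in for a prediction response. The dict shape below is a simplified sketch with made-up values, not the full response schema.

```python
# Simplified stand-in for one output of a prediction response.
response = {
    "outputs": [{
        "model": {
            "id": "my-classifier",
            "model_version": {
                "id": "abc123",
                "output_info": {"output_config": {"max_concepts": 20}},
            },
        },
    }]
}

model = response["outputs"][0]["model"]

# Old access path (no longer populated):
#   output_config = model["output_info"]["output_config"]
# New access path, nested under model_version:
output_config = model["model_version"]["output_info"]["output_config"]
```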

Changes to PostModelOutputs and PostWorkflowResults Responses

January 4th 2023

Exclusion of Some Fields From PostModelOutputs and PostWorkflowResults Prediction Responses
  • When using the PostModelOutputs endpoint or the PostWorkflowResults endpoint to make a prediction call, the entire model information, including all hyperparameters, was included for each output in the response. This is extremely verbose and unnecessary, as the same information appears repeatedly throughout the response. It also impacts network usage, makes results harder to view, process, and debug, and degrades performance.
  • Model description, notes, and related model info fields are excluded from PostModelOutputs and PostWorkflowResults prediction responses. The model and model version IDs are still available in the responses. If you need more model info than is available in the responses, you can look it up by model ID using the GetModel endpoint.
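A GetModel lookup can be sketched as a simple authenticated GET. The PAT, user, app, and model IDs below are placeholders, and the URL follows Clarifai's users/{user_id}/apps/{app_id} REST pattern; verify the exact route against the API reference.

```python
import urllib.request

PAT = "YOUR_PAT_HERE"   # placeholder personal access token
USER_ID = "me"          # placeholder IDs for your own app
APP_ID = "my-app"
MODEL_ID = "my-model"

# GetModel returns the full model object, including description, notes,
# and metadata that prediction responses no longer carry.
url = f"https://api.clarifai.com/v2/users/{USER_ID}/apps/{APP_ID}/models/{MODEL_ID}"
request = urllib.request.Request(url, headers={"Authorization": f"Key {PAT}"})
# urllib.request.urlopen(request).read() would fetch the model JSON.
```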

Other Previous Changes

November 22, 2022

Deprecation of POST /searches

The generic search API will be deprecated in favor of POST /inputs/searches and POST /annotations/searches. POST /searches will still be supported for now, but will not receive any feature updates, so users should switch to the newer search endpoints.

January 20, 2022

Deprecation of name and display_name

To make Clarifai model IDs more readable and user friendly, we plan to make the following API/UI changes during the week of January 17th. Please see the user impact and suggestions below, and contact us if you have any questions.

The old user_unique_id will still be usable in all queries, but responses will be filled with the new v2_user_unique_id. name and display_name are deprecated in the API and UI, and user_unique_id will soon be deprecated as well, so users should switch to the new model ID field, v2_user_unique_id.

November 24, 2021. 9:00am ET

Deprecation of type option

The type option in POST /models and POST /models/searches requests will no longer be supported and will be removed from our API after this point in time. Use model_type_id for model type references.

February 12, 2021. 9:00am ET

Deprecation of delete_all option

The delete_all option in DELETE /inputs requests will no longer be supported and will be removed from our API after this point in time. You can delete inputs by ID. Each request can have at most 128 IDs.

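Since each DELETE /inputs request accepts at most 128 IDs, larger deletions must be batched. This sketch only splits the ID list; sending each DELETE request is left out.

```python
def chunk_ids(input_ids, batch_size=128):
    """Yield successive batches of at most batch_size input IDs."""
    for start in range(0, len(input_ids), batch_size):
        yield input_ids[start:start + batch_size]

# 300 hypothetical input IDs split into batches of 128, 128, and 44,
# each of which would be sent in its own DELETE /inputs request.
batches = list(chunk_ids([f"input-{i}" for i in range(300)]))
```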
October 16, 2020. 9:00am ET

Deprecation of Demographics Model

To reduce the risk of race bias in our own models, we have constructed a new approach to visual recognition of race. We've also divided age, race, and gender recognition into separate models, and then packaged the models into a new public Demographics Workflow. This new approach provides much more flexibility and makes outputs easier to parse. We will be retiring the current demographics model on October 16th, 2020. Please reference this blog post and our API documentation for more information about how you can update your code to take advantage of the new workflow.

October 20, 2020. 9:00am ET

Model Training Does Not Wait For Inputs To Be Processed

Currently, when we train a context-based classifier model, we wait for all inputs to be added to your app before a model version is created and processed, with a 1-hour training timeout. In the future, we will use whatever inputs and annotations are available at the time a model version is created for training. If an input is pending or in progress, that input and its associated annotations will not be used for training. You can check input counts for each status.

February 27, 2020. 9:00am ET

Deprecation of Face object from API

The Face object in our API responses will be deprecated in favor of the list of Concepts that other model types return. This should only affect users of the Celebrity, Demographics, or custom face recognition models, where the data.face attributes like data.face.identity, data.face.age_appearance, data.face.gender_appearance, and data.face.multicultural_appearance will now be returned in the list of data.concepts Concept objects. The API will return both for a while during the transition, to give you time to move your code away from the data.face objects altogether. We are doing this to simplify the API interface and make it more easily compatible with advanced functionality that is coming soon in workflows! The custom face recognition and celebrity models require only a simple change to access the new data.concepts field, but the demographics model is a more fundamental change away from three distinct lists of concepts to a single list. To cope with this, we have introduced a vocab_id field in each data.concepts entry returned by the demographics model, so that you can distinguish age_appearance, gender_appearance, and multicultural_appearance. To convert the new format to the old format, check the Python example here.

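Regrouping the single data.concepts list back into the three old-style lists amounts to grouping by the new vocab_id field. The concept IDs and values below are made up for illustration.

```python
from collections import defaultdict

# Made-up demographics concepts in the new single-list format, each
# tagged with a vocab_id indicating which old list it belonged to.
concepts = [
    {"id": "20-29", "value": 0.91, "vocab_id": "age_appearance"},
    {"id": "feminine", "value": 0.88, "vocab_id": "gender_appearance"},
    {"id": "hispanic", "value": 0.75, "vocab_id": "multicultural_appearance"},
]

# Group the flat list back into per-vocabulary lists.
grouped = defaultdict(list)
for concept in concepts:
    grouped[concept["vocab_id"]].append(concept)
```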
February 24, 2020. 9:00am ET

Consolidation of Input-Related Status Codes

As we support more media types, it is impractical to have status codes for each. Thus, status codes will now be prefixed INPUT_... rather than INPUT_IMAGE_... or INPUT_VIDEO_.... We will maintain the int values for the INPUT_IMAGE_...-prefixed statuses, but no longer support the int values associated with statuses prefixed INPUT_VIDEO_....

February 12, 2020. 9:00am ET

Deprecation of Face model type names

The facedetect* model types will be deprecated in favor of their more general detect* counterparts. For example, these would be the changes of model type:

  • facedetect -> detect
  • facedetect-identity -> detect-concept
  • facedetect-demographics -> detect-concept
  • facedetect-embed -> detect-embed

This change is to unify the APIs around face products and object detection products so that they are compatible everywhere either is used.

February 3, 2020. 9:00am ET

PATCH /inputs overwrite action change

The overwrite action when patching inputs currently has some inconsistent behavior. If you patch fields on an input that already has concepts added to it, those concepts will remain after the patch even though the patch action was overwrite. Going forward, the overwrite behavior will overwrite the entire data object with what is included in the PATCH /inputs API call. Therefore, if concepts are not provided in the patch call but were originally on that input, they will be erased (overwritten with an empty list of concepts). You can maintain the current behavior by always sending back the complete data object from GET /inputs/{input_id}, along with any modifications, when using the overwrite action. Update: this change has become more complicated than originally expected and we may not undergo it after all; more to come in the future. It is still a good idea to update your PATCH calls to use the merge or remove actions instead of overwrite, due to overwrite's inconsistency.

February 1, 2020. 9:00am ET

Deprecation of Focus Model

The Focus model will no longer be supported and will be removed from our API after this point in time. If you have requests for recognizing focus and blurry regions within images, please contact us so that we can help you directly.

November 20, 2019. 9:00am ET

image.crop argument will be deprecated

In some requests, we used to allow cropping of images during the request using the image.crop field. This was for convenience only, but in reality it was rarely used and significantly complicates the processing pipelines under the hood. Therefore, we will no longer support the image.crop field in any requests that used to accept it. If you want similar behavior, please crop the images on the client side and send the cropped bytes as base64-encoded image data.

September 30, 2019. 5:00pm ET

DELETE /inputs will only operate asynchronously

Along the same lines as POST /inputs becoming completely asynchronous, we are cleaning up some inconsistent behavior in the API for deleting inputs. Previously, when a single image was deleted with DELETE /inputs or DELETE /inputs/{input_id}, it was a synchronous operation, but when a batch of images was deleted, it was asynchronous. We are making both asynchronous. This allows us to provide more advanced functionality with workflows that index your inputs. What this means for your code is that if your application relies on the input having been deleted when the DELETE /inputs or DELETE /inputs/{input_id} call returns, you now need to add a second call to GET /inputs/{input_id} to check that it fails with a not-found error.

September 24, 2019. 5:00pm ET

POST /inputs will only operate asynchronously

We are cleaning up some inconsistent behavior in the API where a single image added with POST /inputs was a synchronous operation, but a batch of images was asynchronous. We are making both asynchronous. This allows us to provide more advanced functionality with workflows that index your inputs. What this means for your code is that if your application relies on added inputs having already been indexed when the POST /inputs call returns, you now need to add a second call to GET /inputs/{input_id} to check the status of the input you just added, looking for the 30000 (INPUT_IMAGE_DOWNLOAD_SUCCESS) status code.

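The polling step can be sketched as a small helper. Here fetch_status is a placeholder for your own function that calls GET /inputs/{input_id} and returns the input's status code; the timeout and interval values are arbitrary.

```python
import time

def wait_for_input(fetch_status, input_id, timeout_s=60, interval_s=2):
    """Poll until the input's status code is 30000 (INPUT_IMAGE_DOWNLOAD_SUCCESS),
    or raise TimeoutError if that does not happen within timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fetch_status(input_id) == 30000:
            return True
        time.sleep(interval_s)
    raise TimeoutError(f"input {input_id} was not processed in {timeout_s}s")
```

For example, `wait_for_input(my_status_fn, "my-input-id")` would block until the input is indexed, where my_status_fn wraps the GET /inputs/{input_id} call.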
September 11, 2019. 9:00am ET

Scheduled Database Downtime

We plan to upgrade our database to make it faster and provide more space for your applications. We expect a few minutes of downtime during this upgrade, but you should plan for up to an hour of downtime in case things don't go as expected. This will primarily affect the following uses of our platform: POST/GET/PATCH/DELETE inputs, Search, Custom Training, and Model Evaluation.