Creating and Managing Model Versions
Learn about creating and managing model versions
Developing performant machine learning models requires a lot of iterative work. To get the best-performing model, you may need to change hyperparameters, training data, or other configuration settings.
Keeping a history of the changes over time can help you achieve the objectives you initially envisioned with your machine learning models.
The Clarifai Portal allows you to track and manage different versions of your model. Using the Portal to practice model version control can help you achieve several things, such as:
- Versioned reproducibility — As you make changes to your model, its behavior also changes. By tracking versions, you can easily reproduce the same behavior later.
- Better collaboration — In a team, version control helps team members avoid conflicts, track changes, and collaborate effectively.
- Improved troubleshooting — After updating or modifying a model, it can be difficult to trace the changes that affect its performance. Tracking model versions allows you to easily compare different versions and pinpoint the changes that resulted in problems or improvements.
- Regulatory compliance — By tracking model versions, you can demonstrate that a particular model was used for decision making. This transparency can enhance the auditability of your systems and assist in satisfying regulatory requirements.
Model Versions Table
After you create and train a custom model, it is listed on the Models management page, where you can see all the models available in your app.
Training a model automatically creates a new version for it.
You can get to the page by navigating to the individual page of your app and selecting the Models option on the collapsible left sidebar.
Select the model whose details you want to see.
You'll be redirected to a page for viewing the selected model.
- Click the Create New Model Version button at the upper-right corner of the page to initiate a new training process, which will generate another model version.
- Click the drop-down button to list the available model versions. This allows you to select the version of the model you want to use for inferencing. You can also copy its ID to the clipboard.
- Select the Versions tab to unveil a table that displays the available versions of your model.
- Click the sort button in the model versions table to organize the listings alphabetically, choosing between ascending order (A to Z) or descending order (Z to A).
- Click the designated area in the model versions table to add a brief description of the model version.
Anytime your model finishes training, you'll get an email message with the training results and links to helpful resources on how to make the most of the model.
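If you prefer to script this workflow, the same list of versions can be retrieved through the Clarifai API. The sketch below is a minimal example that assumes the clarifai-grpc Python client; the PAT, USER_ID, APP_ID, and MODEL_ID values are placeholders you would replace with your own.

```python
# Minimal sketch: list the versions of a model with the clarifai-grpc Python client.
# PAT, USER_ID, APP_ID, and MODEL_ID are placeholders — replace them with your own values.
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
from clarifai_grpc.grpc.api.status import status_code_pb2

PAT = "YOUR_PAT_HERE"
USER_ID = "YOUR_USER_ID"
APP_ID = "YOUR_APP_ID"
MODEL_ID = "YOUR_MODEL_ID"

stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key " + PAT),)
user_app_id = resources_pb2.UserAppIDSet(user_id=USER_ID, app_id=APP_ID)

response = stub.ListModelVersions(
    service_pb2.ListModelVersionsRequest(
        user_app_id=user_app_id,
        model_id=MODEL_ID,
        per_page=10,  # page through versions ten at a time
    ),
    metadata=metadata,
)
if response.status.code != status_code_pb2.SUCCESS:
    raise Exception(f"ListModelVersions failed: {response.status.description}")

# Each entry mirrors a row in the Portal's model versions table.
for version in response.model_versions:
    print(version.id, version.status.description, version.created_at.ToDatetime())
```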
The model versions table allows you to complete various management tasks.
Cancel training and view training logs
You can begin a new training process by clicking the Create New Model Version button and following the prompts to complete the training.
If you want to halt an ongoing training session, click the Cancel training button located in the Status column. You can also access the training logs by clicking the View Training Log button to review the details of the process.
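Starting a new training run, as noted above, also has an API counterpart. The sketch below again assumes the clarifai-grpc Python client and reuses the stub, metadata, and user_app_id objects from the listing example; it kicks off training for a new version of an existing custom model, and deep-trained model types may require additional training settings not shown here.

```python
# Minimal sketch: start training a new version of an existing custom model.
# Assumes stub, metadata, and user_app_id are set up as in the listing example above.
from clarifai_grpc.grpc.api import service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2

MODEL_ID = "YOUR_MODEL_ID"  # placeholder

response = stub.PostModelVersions(
    service_pb2.PostModelVersionsRequest(
        user_app_id=user_app_id,
        model_id=MODEL_ID,
    ),
    metadata=metadata,
)
if response.status.code != status_code_pb2.SUCCESS:
    raise Exception(f"PostModelVersions failed: {response.status.description}")

# Training runs asynchronously; the new version appears in the versions table
# (or in ListModelVersions) with a "training" status until it completes.
print("Training started:", response.status.description)
```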
View training dataset
The Training Dataset column allows you to access the specifics of the dataset used for training a model version. If you click a link within the column, you will be redirected to a page containing comprehensive details about the dataset.
Select an evaluation dataset
The Evaluation Dataset column allows you to select a dataset for assessing the performance of your model version. If you click the field, a drop-down list appears, enabling you to select a dataset version that has already been used for evaluation.
This selection can include datasets within your current application or those from another application you own, facilitating cross-app evaluation.
Alternatively, click the Evaluate with a new Dataset button to open a pop-up window where you can choose a dataset that hasn't been evaluated before, along with the dataset version to use for the evaluation.
If no dataset version is selected, the latest version is used automatically. The pop-up also allows you to create a new dataset for the evaluation.
- Cross-app evaluation refers to evaluating the performance of a model version using datasets from different applications you own. This means that you can assess how well your model performs across various contexts or use cases by leveraging datasets from separate applications within your ownership.
- The model versions table currently supports cross-app evaluation for a wide range of model types, including visual classifiers, visual detectors, text classifiers, transfer learning models, and fine-tuned LLMs.
Evaluate a model version's performance
To evaluate the performance of a model version, start by selecting the dataset you want to use for the evaluation — as explained earlier.
Next, click the Calculate button in the ROC column, which will start the evaluation process.
The evaluation may take up to 30 minutes. Once complete, the Calculate button will become a View Results button, which you can click to see the evaluation results.
You can read here to learn how to interpret the evaluation results.
You can compare model performance metrics across different versions. To make an "apples to apples" comparison, we recommend keeping a fixed concept list for your model, so that you can pick the best model version after comparing them.
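If you want to trigger an evaluation outside the Portal, the sketch below shows one possible approach using the model version metrics endpoint of the clarifai-grpc Python client. The placeholder IDs are hypothetical, it reuses the setup from the earlier examples, and the dataset-based evaluation options described above may not be exposed through this minimal request.

```python
# Minimal sketch: trigger an evaluation for a model version
# (the API counterpart of clicking Calculate in the Portal).
# Assumes stub, metadata, and user_app_id are set up as in the earlier examples.
from clarifai_grpc.grpc.api import service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2

MODEL_ID = "YOUR_MODEL_ID"              # placeholder
MODEL_VERSION_ID = "YOUR_VERSION_ID"    # placeholder

response = stub.PostModelVersionMetrics(
    service_pb2.PostModelVersionMetricsRequest(
        user_app_id=user_app_id,
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,
    ),
    metadata=metadata,
)
if response.status.code != status_code_pb2.SUCCESS:
    raise Exception(f"PostModelVersionMetrics failed: {response.status.description}")

# Evaluation runs asynchronously; check back once it completes, just as the
# Calculate button turns into View Results in the Portal.
print("Evaluation started for version", MODEL_VERSION_ID)
```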
See model version details
Click the See Model Version Details button located in the Actions column to see the details of the model version.
You'll be redirected to the model version viewer page, where you can see different details of your versions and track their history.
The selection pane on the left side of the page allows you to choose the version whose details you want to see. Your selected version will be highlighted; by default, the latest trained model version is highlighted and appears at the top.
You can see several details, including the following:
- Dataset details used to train the model version
- Concepts present in the model version
- Training template used to train the model version
- Inference settings, such as batch size, image size, and number of epochs
- Different advanced options
- Different output settings
You can also click the Create a new version button to create a new version of your model.
You can create a new model version in either of two ways:
- From Version Config — creates a new version from your own model configuration, using the settings of the currently selected (highlighted) version
- From Default Config — creates a blank new version from Clarifai's default configuration settings
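The same per-version details can also be fetched programmatically. The sketch below assumes the clarifai-grpc Python client and the stub, metadata, and user_app_id objects from the earlier examples, with hypothetical placeholder IDs.

```python
# Minimal sketch: fetch the details of one model version.
# Assumes stub, metadata, and user_app_id are set up as in the earlier examples.
from clarifai_grpc.grpc.api import service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2

MODEL_ID = "YOUR_MODEL_ID"              # placeholder
MODEL_VERSION_ID = "YOUR_VERSION_ID"    # placeholder

response = stub.GetModelVersion(
    service_pb2.GetModelVersionRequest(
        user_app_id=user_app_id,
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,
    ),
    metadata=metadata,
)
if response.status.code != status_code_pb2.SUCCESS:
    raise Exception(f"GetModelVersion failed: {response.status.description}")

version = response.model_version
print("ID:", version.id)
print("Status:", version.status.description)
print("Created:", version.created_at.ToDatetime())
```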
Other management tasks
The Created Date column allows you to see the precise date and time when the model version was created.
Besides the See Model Version Details option in the Actions column, you can also:
- Download the logs used for training
- Copy the model version ID
- Delete the model version
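These actions have API counterparts as well. As a minimal sketch, again assuming the clarifai-grpc Python client and the setup from the earlier examples with hypothetical placeholder IDs, deleting a version could look like the following; deletion is permanent, so double-check the version ID first.

```python
# Minimal sketch: permanently delete one model version.
# Assumes stub, metadata, and user_app_id are set up as in the earlier examples.
from clarifai_grpc.grpc.api import service_pb2
from clarifai_grpc.grpc.api.status import status_code_pb2

MODEL_ID = "YOUR_MODEL_ID"                  # placeholder
MODEL_VERSION_ID = "VERSION_ID_TO_DELETE"   # placeholder — deletion cannot be undone

response = stub.DeleteModelVersion(
    service_pb2.DeleteModelVersionRequest(
        user_app_id=user_app_id,
        model_id=MODEL_ID,
        version_id=MODEL_VERSION_ID,
    ),
    metadata=metadata,
)
if response.status.code != status_code_pb2.SUCCESS:
    raise Exception(f"DeleteModelVersion failed: {response.status.description}")

print("Deleted version", MODEL_VERSION_ID)
```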