
How to Evaluate An Image Classification Model

by Minhajul Hoque


Image Classification Example

What is an Image Classification Model?

An image classification model is a type of computer program that has learned to recognize and sort images into different categories or labels.

It’s different from a detection model, which can identify where specific objects are located within an image. A classification model only tells you whether it believes a certain category or label is present in the image, not precisely where in the image it is located.

What Metrics Can We Use?

When we’re trying to determine how well an image classification model is performing, there are different ways to measure its success. In this article, we’ll explore a few of the most commonly used metrics in the industry and look at the advantages and disadvantages of each. This will help us better understand how to assess the model’s performance.

Accuracy

Accuracy is like a teacher grading a multiple-choice test. If you get most of the questions right, your grade is high, and if you get most of them wrong, your grade is low.

In the world of image classification, the “student” is the model, and the “questions” are the images. The model looks at the image and predicts what it is — if it gets it right, it gets a point, and if it gets it wrong, it gets a big red X.

So, accuracy is all about counting up the number of points and dividing it by the total number of questions to get a score between 0 and 1. If your model gets a high accuracy score, it means it’s acing the test and making mostly correct predictions. But if it gets a low accuracy score, it means it’s failing the test and making mostly incorrect predictions.

Accuracy Formula

However, like a bad student, the model can “cheat”. Accuracy can be misleading when working with imbalanced datasets, where the number of samples in each class is significantly different. For example, if you are training a spam classifier on a dataset with 99% non-spam and 1% spam emails, a model that always predicts non-spam will have a high accuracy of 99%. This can give a false impression of a well-performing model. Therefore, in such cases, it’s important to consider other metrics to better evaluate the performance of the model.
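To make this concrete, here is a minimal sketch of the “cheating” spam example above, assuming scikit-learn is available and using made-up labels:

```python
from sklearn.metrics import accuracy_score

# Toy imbalanced dataset: 1 = spam, 0 = non-spam (99 non-spam emails, 1 spam email).
y_true = [0] * 99 + [1]

# A "cheating" model that always predicts non-spam.
y_pred = [0] * 100

# Accuracy = correct predictions / total predictions
print(accuracy_score(y_true, y_pred))  # 0.99, even though the one spam email was missed
```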

Precision & Recall

Precision and recall are like Batman and the Joker, but for computer vision metrics. They are constantly fighting for supremacy: pushing one up tends to pull the other down. It’s like a game of tug-of-war, except the rope is your data and the two teams are precision and recall.

Precision & Recall Formulas

Precision is like a sniper rifle: if your model is precise, it hits the target accurately, and you won’t end up shooting innocent bystanders (producing many false positives). However, if your model is not precise, it’s like using a shotgun: you’ll hit everything in sight, including things you didn’t mean to hit. In use cases like spam classification, you want to be like the sniper and have high precision, so that you don’t end up marking important emails as spam.

Recall is like a treasure hunter — if your model has high recall, it’s able to find all the hidden gems and bring them to the surface (find all the true positives). However, if your model has low recall, it’s like a forgetful pirate who misses half of the treasure. In use cases like cancer classification, you want to be like the treasure hunter and have high recall, so that you don’t miss any cases where cancer is present.
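Here is a minimal sketch of both metrics on a made-up spam example, assuming scikit-learn is available:

```python
from sklearn.metrics import precision_score, recall_score

# Made-up labels: 1 = spam, 0 = non-spam.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]

# Precision = TP / (TP + FP): of the emails flagged as spam, how many really were spam?
print(precision_score(y_true, y_pred))  # 2 / (2 + 1) = 0.67

# Recall = TP / (TP + FN): of the actual spam emails, how many did we catch?
print(recall_score(y_true, y_pred))     # 2 / (2 + 2) = 0.50
```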

Precision-Recall Curve

Imagine you’re trying to catch fish with a net. The precision-recall curve is like trying to find the perfect size of the net so that you catch the most fish without letting any escape.

The net’s size represents the threshold for making predictions. If you have a really small net (high threshold), you’ll catch very few fish, but you can be pretty sure that the ones you catch are the right ones. This is high precision, but low recall. On the other hand, if you have a really big net (low threshold), you’ll catch a lot of fish, but you’ll also catch a lot of other things that aren’t fish. This is high recall, but low precision.

So, the precision-recall curve is all about finding the sweet spot — the perfect size of the net that catches as many fish as possible without catching any non-fish items. It’s a trade-off between precision and recall, and it helps you visualize the performance of your model across different thresholds.

Precision-Recall Curve
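As a rough sketch of how such a curve is produced (assuming scikit-learn and matplotlib, with made-up labels and predicted probabilities), precision_recall_curve computes one precision/recall pair per candidate threshold, which you can then plot:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve

# Made-up ground-truth labels and predicted probabilities from a classifier.
y_true   = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.65, 0.7, 0.3]

# One precision/recall pair is computed for every candidate threshold ("net size").
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

plt.plot(recall, precision, marker="o")
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title("Precision-Recall Curve")
plt.show()
```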

F1-Score

The F1 score is like an average grade for your model, but instead of getting an A or a B, you get a number between 0 and 1 that tells you how well your model is doing.

Think of it like a pizza party. If you have a bunch of friends who like different toppings, you want to order a pizza that everyone will like. The F1 score is like trying to find the perfect pizza that everyone will enjoy.

The “F” stands for “F-measure”: the F1 score is the harmonic mean of precision and recall. Precision is like making sure the pizza only has the toppings your friends like, while recall is like making sure you don’t forget any of the toppings that they like. The F1 score combines these two measures into one metric to give you an overall idea of how well your model is performing.

F1-Score Formula
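Continuing the made-up spam example from the precision and recall section (again assuming scikit-learn), the F1 score can be computed directly or via the harmonic-mean formula:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Same made-up spam labels as before: 1 = spam, 0 = non-spam.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]

p = precision_score(y_true, y_pred)  # 0.67
r = recall_score(y_true, y_pred)     # 0.50

# F1 is the harmonic mean of precision and recall.
print(2 * p * r / (p + r))       # 0.57
print(f1_score(y_true, y_pred))  # same value, computed by scikit-learn
```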

ROC Curve

The ROC (Receiver Operating Characteristic) curve is a tool that helps us evaluate the performance of binary classification models. Think of it like a musical performance: the ROC curve tells us how well our model is singing its tune.

In a ROC curve, we plot the True Positive Rate (TPR) against the False Positive Rate (FPR) at various thresholds. The TPR measures how many of the positive cases our model correctly identifies, while the FPR measures how many of the negative cases are falsely identified as positive. Essentially, the ROC curve plots the model’s sensitivity (how well it detects the positive cases) against 1 minus its specificity (how often it falsely flags the negative cases).

ROC Curve Example

The ROC curve is not a metric in itself, but it is a useful tool for evaluating your model and for calculating the AUC score. For a multi-class use case, we can plot N ROC curves, one per class, using the one-vs-all method.
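As a rough sketch (assuming scikit-learn and matplotlib, with made-up labels and scores), roc_curve returns one (FPR, TPR) pair per threshold, which you can plot against the diagonal “coin flip” line:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

# Made-up labels and predicted probabilities for the positive class.
y_true   = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.65, 0.7, 0.3]

# TPR and FPR are computed at every candidate threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_scores)

plt.plot(fpr, tpr, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="coin flip")  # diagonal = random guessing
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curve")
plt.legend()
plt.show()
```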

AUC Score

The AUC, or area under the ROC curve, is a metric used to measure the performance of a binary classifier, such as a spam filter or fraud detector. It’s a numerical value between 0 and 1 that represents the overall performance of the classifier and its degree of separability, where 1 means the classifier is perfect at distinguishing between two classes, and 0.5 means it’s no better than a coin flip.

Think of it like a doctor who is trying to diagnose patients with a particular disease. A high AUC score means the doctor is doing a great job of distinguishing between patients who have the disease and those who don’t. Just like a good doctor, a good model with a high AUC score can accurately diagnose positive and negative samples and help make the right decisions based on that diagnosis.

ROC and AUC Example
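A minimal sketch, reusing the made-up labels and scores from the ROC example and assuming scikit-learn:

```python
from sklearn.metrics import roc_auc_score

# Same made-up labels and scores as the ROC example above.
y_true   = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.65, 0.7, 0.3]

# 1.0 = perfect separation of the two classes, 0.5 = no better than a coin flip.
print(roc_auc_score(y_true, y_scores))  # 0.92 for this toy data
```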

Confusion Matrix

A confusion matrix is a useful representation of your model results. You can think of it as a report card for your model’s performance. It lets you see where your model is struggling and where it’s acing the test. Imagine you’re a teacher grading a class of students. You might notice that some students confuse the concepts of multiplication and division, just like how our model is confusing dogs and cats in the CIFAR10 dataset. It’s important to identify these confusions to help our model learn and improve.

Confusion Matrix for The CIFAR10 Dataset

Like the ROC curve, the confusion matrix is not a metric in itself, but a useful tool that many data scientists use.
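As a quick illustration (a made-up toy example using scikit-learn, with integer labels standing in for a few CIFAR10-style classes), a confusion matrix can be built like this:

```python
from sklearn.metrics import confusion_matrix

# Made-up labels for three classes; here 0 = cat, 1 = dog, 2 = ship for illustration.
y_true = [0, 0, 0, 1, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 0, 1, 0, 1, 2, 2, 2, 0]

# Rows are the true classes, columns are the predicted classes.
print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2]))
# [[3 1 0]   <- cats: 3 correct, 1 mistaken for a dog
#  [1 2 0]   <- dogs: 2 correct, 1 mistaken for a cat
#  [0 0 3]]  <- ships: all 3 correct
```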