Author: Vladimir Haltakov

Let's talk about a common problem in ML - imbalanced data βš–οΈ

Imagine we want to detect all pixels belonging to a traffic light in a self-driving car's camera image. We train a model that achieves 99.88% accuracy. Pretty cool, right?

Actually, this model is useless ❌

Let me explain πŸ‘‡


The problem is that the data is severely imbalanced - the ratio of background pixels to traffic light pixels is about 800:1.

If we don't take any measures, our model will learn to classify every pixel as background, giving us 99.88% accuracy. But it's useless!
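To see where that number comes from, here is a quick sketch (the absolute counts are made up; only the 800:1 ratio from above matters) of the accuracy a classifier gets by always predicting "background":

```python
# A "classifier" that always predicts background, on a dataset where
# background pixels outnumber traffic light pixels roughly 800:1.
n_traffic_light = 1_000              # assumed count, only the ratio matters
n_background = 800 * n_traffic_light

# Every pixel is predicted as background, so only background pixels are correct
correct = n_background
total = n_background + n_traffic_light

accuracy = correct / total
print(f"Accuracy: {accuracy:.2%}")  # Accuracy: 99.88%
```

Impressive accuracy, zero traffic lights detected.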

What can we do? πŸ‘‡

Let me tell you about 4 ways of dealing with imbalanced data:

▪️ Choosing the right evaluation metric
▪️ Undersampling your dataset
▪️ Oversampling your dataset
▪️ Adapting the loss

Let's dive in πŸ‘‡

1️⃣ Evaluation metrics

Looking at the overall accuracy is a very bad idea when dealing with imbalanced data. There are other measures that are much better suited:
β–ͺ️ Precision
β–ͺ️ Recall
β–ͺ️ F1 score
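Here's a small sketch of how these three metrics are computed by hand for a toy binary problem (the labels are made up; 1 = traffic light pixel, 0 = background):

```python
# Toy ground-truth labels and predictions (made up for illustration)
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of all detections, how many are real?
recall = tp / (tp + fn)     # of all real positives, how many did we find?
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(precision, recall, f1)
```

Note that the "always background" model from above would score a recall of 0, immediately exposing it as useless, even though its accuracy is 99.88%.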

I wrote a whole thread on these metrics if you want to dive deeper.


2️⃣ Undersampling

The idea is to throw away samples of the overrepresented classes.

One way to do this is to throw away samples at random. Ideally, though, we want to throw away only redundant samples - ones that look very similar to samples we keep.
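The random variant is easy to sketch. Here we keep all minority samples and draw an equal number of majority samples at random (the dataset and labels are made up for illustration):

```python
import random

random.seed(0)  # for reproducibility

# Made-up dataset of (sample_id, label) pairs with an 80:1 imbalance
dataset = [(i, "background") for i in range(800)] + \
          [(i, "traffic_light") for i in range(10)]

minority = [s for s in dataset if s[1] == "traffic_light"]
majority = [s for s in dataset if s[1] == "background"]

# Keep all minority samples, randomly draw as many majority samples
balanced = minority + random.sample(majority, len(minority))
print(len(balanced))  # 20 samples, 10 per class
```

The downside is that we may throw away rare, informative background samples by pure chance.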

Here is a strategy to achieve that πŸ‘‡
Machine Learning Paper Reviews πŸ”ŽπŸ“œ

Check out this thread for short reviews of some interesting Machine Learning and Computer Vision papers. I explain the basic ideas and main takeaways of each paper in a Twitter thread.

πŸ‘‡ I'm adding new reviews all the time! πŸ‘‡

AlexNet - the paper that started the deep learning revolution in Computer Vision!


DenseNet - reducing the size and complexity of CNNs by adding dense connections between layers.


Playing for data - generating synthetic ground truth from a video game (GTA V) and using it to improve semantic segmentation models.


Transformers for image recognition - a new paper with the potential to replace convolutions with a transformer.
Machine Learning in the Real World 🧠 πŸ€–

ML for real-world applications is much more than designing fancy networks and fine-tuning parameters.

In fact, you will spend most of your time curating a good dataset.

Let's go through the process together πŸ‘‡

#RepostFriday


Collect Data πŸ’½

We need to represent the real world as accurately as possible. If some situations are underrepresented, we are introducing Sampling Bias.

Sampling Bias is nasty because we'll have high test accuracy, but our model will perform badly when deployed.

πŸ‘‡

Traffic Lights 🚦

Let's build a model to recognize traffic lights for a self-driving car. We need to collect data for different:

β–ͺ️ Lighting conditions
β–ͺ️ Weather conditions
β–ͺ️ Distances and viewpoints
β–ͺ️ Strange variants

And if we sample only 🚦 we won't detect πŸš₯ πŸ€·β€β™‚οΈ

πŸ‘‡


Data Cleaning 🧹

Now we need to clean all corrupted and irrelevant samples. We need to remove:

β–ͺ️ Overexposed or underexposed images
β–ͺ️ Images in irrelevant situations
β–ͺ️ Faulty images

Leaving them in the dataset will hurt our model's performance!
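One simple heuristic for catching over- and underexposed images is to check the mean pixel brightness against thresholds (the thresholds and images below are assumptions, not values from the thread):

```python
def is_well_exposed(pixels, low=20, high=235):
    """pixels: flat list of 8-bit grayscale values (0-255).
    Thresholds are arbitrary example values."""
    mean = sum(pixels) / len(pixels)
    return low <= mean <= high

# Made-up 100-pixel "images" for illustration
images = {
    "normal": [128] * 100,        # mid-gray, keep
    "overexposed": [250] * 100,   # nearly white, drop
    "underexposed": [5] * 100,    # nearly black, drop
}

kept = [name for name, px in images.items() if is_well_exposed(px)]
print(kept)  # ['normal']
```

Real pipelines often also look at the brightness histogram, not just the mean, but the idea is the same: define a rule, apply it to every sample, drop what fails.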

πŸ‘‡


Preprocess Data βš™οΈ

Most ML models like their data nicely normalized and properly scaled. Bad normalization can also lead to worse performance (I have a nice story for another time...).

β–ͺ️ Crop and resize all images
β–ͺ️ Normalize all values (usually 0 mean and 1 std. dev.)
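The normalization step above (0 mean, 1 standard deviation) can be sketched like this:

```python
import math

def standardize(values):
    """Shift and scale values to zero mean and unit standard deviation."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

raw = [10.0, 20.0, 30.0, 40.0]  # made-up feature values
normed = standardize(raw)

new_mean = sum(normed) / len(normed)
new_std = math.sqrt(sum(v ** 2 for v in normed) / len(normed))
print(round(new_mean, 6), round(new_std, 6))  # ~0.0 and ~1.0
```

In practice the mean and standard deviation are computed on the training set once and then reused for validation, test, and deployment data.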

πŸ‘‡