I'll blow your mind with a technique you aren't using yet.
Sometimes, you want your system to do exactly the opposite of what your machine learning model thinks you should do.
Let me convince you. ↓
I'm going to start with a nice problem:
Imagine a model that looks at a picture of an electrical transformer and predicts whether it's about to break or not.
Don't worry about how the model does this. We are going to focus on the results instead.
There are 4 possible results for this model:
1. It predicts a bad unit as bad.
2. It predicts a bad unit as good.
3. It predicts a good unit as bad.
4. It predicts a good unit as good.
#2 and #3 are the mistakes the model makes.
Assuming we run 100 units through the model, we can organize the results in a matrix:
• The rows represent the "actual" condition of the transformer.
• The columns represent the "prediction" of the model.
We call this a "Confusion Matrix."
Here is how to read this confusion matrix:
• 60 bad units were predicted as bad.
• 3 bad units were predicted as good.
• 7 good units were predicted as bad.
• 30 good units were predicted as good.
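Those counts are easy to reproduce in code. Here is a minimal sketch that builds the same confusion matrix from scratch, using the numbers above. The label encoding (0 = bad unit, 1 = good unit) and the variable names are my own assumptions, not something fixed by the model:

```python
# Encoding (an assumption): 0 = bad unit, 1 = good unit.
actual    = [0] * 63 + [1] * 37          # 63 truly bad units, 37 truly good ones
predicted = [0] * 60 + [1] * 3 \
          + [0] * 7  + [1] * 30          # the model's calls, in the same order

# Rows = actual condition, columns = the model's prediction.
matrix = [[0, 0], [0, 0]]
for a, p in zip(actual, predicted):
    matrix[a][p] += 1

print(matrix)  # [[60, 3], [7, 30]]
```

The first row is the bad units (60 caught, 3 missed), the second row is the good units (7 flagged by mistake, 30 cleared), and everything adds up to the 100 units we ran through the model.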