Ever heard of Autoencoders?
The first time I saw a Neural Network with more output neurons than hidden-layer neurons, I couldn't figure out how it would work!
#DeepLearning #MachineLearning
Here's a little something about them: 🧵👇

These networks are a primary choice for data-compression tasks in Machine Learning.
The encoder squeezes the input into a small representation; later, anyone who needs the data can take that small representation and recreate the original, just like a zip file.📥
Our inputs and outputs are the same, and a simple Euclidean distance can be used as the loss function L for measuring the reconstruction.
Of course, we wouldn't expect a perfect reconstruction.
We are just trying to minimize this loss L. All the backpropagation rules still hold.
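A minimal sketch of that idea in plain NumPy (toy data and layer sizes are my own, purely for illustration): a linear encoder squeezes 6-D points through a 2-unit bottleneck, the decoder reconstructs them, and ordinary gradient descent shrinks the squared Euclidean loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples that really live on a 2-D subspace of 6-D space,
# so a 2-unit bottleneck can reconstruct them well.
Z = rng.standard_normal((200, 2))
B = rng.standard_normal((2, 6))
X = Z @ B

# Encoder/decoder weights (linear autoencoder, no biases, for brevity).
W_enc = 0.1 * rng.standard_normal((6, 2))
W_dec = 0.1 * rng.standard_normal((2, 6))

lr = 0.01
losses = []
for _ in range(500):
    H = X @ W_enc          # encode: 6-D input -> 2-D bottleneck
    X_hat = H @ W_dec      # decode: 2-D code -> 6-D reconstruction
    E = X_hat - X          # reconstruction error
    losses.append(np.mean(E ** 2))  # L: mean squared (Euclidean) distance
    # Plain backprop through the two linear layers
    # (constant scale factors folded into the learning rate).
    g_dec = H.T @ E / len(X)
    g_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# losses[-1] should be well below losses[0]: the network learned to compress.
```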

▫️ Can learn non-linear transformations, thanks to non-linear activation functions and multiple layers.
▫️ Isn't limited to dense layers: it can learn from convolutional layers too, which are a better fit for images and videos, right?
▫️ Can make use of pre-trained layers from another model, applying transfer learning to enhance the encoder/decoder.
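To see why convolutional layers suit images, here's a rough parameter count (the 28x28 input and layer sizes are hypothetical, just for comparison): a dense layer connects to every pixel, while a conv layer reuses one small kernel across the whole image.

```python
# Dense encoder layer: flatten a 28x28 image into a 64-unit layer.
img_h, img_w = 28, 28
hidden = 64
dense_params = (img_h * img_w) * hidden + hidden  # weights + biases

# Conv encoder layer: eight 3x3 kernels sliding over the 1-channel image.
kernel, in_ch, filters = 3, 1, 8
conv_params = kernel * kernel * in_ch * filters + filters  # kernels + biases

print(dense_params, conv_params)  # the conv layer needs far fewer parameters
```

The conv layer also keeps the 2-D spatial structure that flattening destroys, which is the other half of why it works better for images.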
Some popular applications: 👇
🔸 Image Colouring
🔸 Feature Variation
🔸 Dimensionality Reduction
🔸 Denoising Image
🔸 Watermark Removal
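For the denoising case, the trick is that the input is the corrupted image while the reconstruction target is the clean one. A tiny sketch of that setup, assuming toy synthetic data and a plain least-squares linear "denoiser" standing in for a full network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean data lives on a 4-D subspace of 16-D space; noisy copies are the input.
d, k, n = 16, 4, 400
basis = rng.standard_normal((k, d))
clean = rng.standard_normal((n, k)) @ basis
noisy = clean + 0.3 * rng.standard_normal((n, d))

# "Train": fit a linear map W so that noisy @ W approximates the CLEAN target.
W, *_ = np.linalg.lstsq(noisy[:300], clean[:300], rcond=None)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Evaluate on held-out samples: doing nothing keeps all the noise,
# the learned map strips most of it by projecting toward the clean subspace.
identity_err = mse(noisy[300:], clean[300:])
denoised_err = mse(noisy[300:] @ W, clean[300:])
```

A denoising autoencoder follows the same recipe, just with a non-linear encoder/decoder in place of `W`.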