Ever heard of Autoencoders?

The first time I saw a neural network with more output neurons than hidden-layer neurons, I couldn't figure out how it would work!

#DeepLearning #MachineLearning
Here's a little something about them: 🧵👇

Autoencoders are unsupervised neural networks whose architecture you can picture as two funnels connected at their narrow ends.

These networks are primarily used for data compression tasks in Machine Learning.
We feed them data so they can learn the most important features: a smaller representation that keeps the integrity of the data.

Later, whenever it's needed, we can take that small representation and recreate the original, just like a zip file. 📥
Being unsupervised, they require no labels.
Our inputs and outputs are the same, and a simple Euclidean distance can be used as the loss function to measure the reconstruction.

Of course, we wouldn't expect a perfect reconstruction.
We can think of an autoencoder as having two components, an encoder and a decoder, represented by the equations below:
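In the standard formulation, with encoder $\phi$ and decoder $\psi$:

$$z = \phi(x), \qquad \hat{x} = \psi(z), \qquad L(x, \hat{x}) = \lVert x - \psi(\phi(x)) \rVert^2$$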

We are just trying to minimize L here. All the usual backpropagation rules still hold.
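Here's a minimal sketch of that idea in Keras. (The 784-dim flattened-image input and the layer sizes are my own illustrative assumptions, not anything from the thread.)

```python
# Minimal dense autoencoder sketch (Keras).
from tensorflow import keras
from tensorflow.keras import layers

input_dim, code_dim = 784, 32          # e.g. flattened 28x28 images -> 32-dim code

inputs = keras.Input(shape=(input_dim,))
x = layers.Dense(128, activation="relu")(inputs)
code = layers.Dense(code_dim, activation="relu")(x)        # the narrow end
x = layers.Dense(128, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="sigmoid")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")          # squared Euclidean distance

# Inputs double as targets -- no labels needed:
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=256)
```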
Advantages over PCA:

▫️ Can learn non-linear transformations, with non-linear activation functions and multiple layers.

▫️ Isn't limited to dense layers; convolutional layers work too, which is better for images and videos, right? (See the sketch after this list.)
▫️ More efficient to learn several layers with an autoencoder than one huge transformation with PCA

▫️ Can make use of pre-trained layers from another model to apply transfer learning, enhancing the encoder/decoder
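For instance, a convolutional autoencoder for 28×28 grayscale images could look like this (an illustrative sketch; the filter counts are arbitrary):

```python
# Convolutional autoencoder sketch (Keras): downsample to a spatial bottleneck,
# then upsample back with transposed convolutions.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2, padding="same")(x)                    # 28x28 -> 14x14
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
code = layers.MaxPooling2D(2, padding="same")(x)                 # 14x14 -> 7x7 bottleneck

x = layers.Conv2DTranspose(8, 3, strides=2, activation="relu", padding="same")(code)
x = layers.Conv2DTranspose(16, 3, strides=2, activation="relu", padding="same")(x)
outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

conv_autoencoder = keras.Model(inputs, outputs)
conv_autoencoder.compile(optimizer="adam", loss="mse")
```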
Some Common Applications:

🔸 Image Colouring
🔸 Feature Variation
🔸 Dimensionality Reduction
🔸 Image Denoising (sketch below)
🔸 Watermark Removal
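For denoising, the trick is to corrupt the inputs while keeping the clean images as targets. A sketch reusing the dense autoencoder from above (the noise level is an arbitrary choice):

```python
# Denoising setup: train noisy -> clean.
import numpy as np

def add_noise(x, noise_factor=0.3):
    noisy = x + noise_factor * np.random.normal(size=x.shape)
    return np.clip(noisy, 0.0, 1.0)   # keep pixels in [0, 1]

# x_train_noisy = add_noise(x_train)
# autoencoder.fit(x_train_noisy, x_train, epochs=20, batch_size=256)
```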
Some famous types of autoencoders:

🔹 Convolutional Autoencoders
🔹 Sparse Autoencoders (sketch below)
🔹 Deep Autoencoders
🔹 Contractive Autoencoders
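A sparse autoencoder, for example, just adds an activity penalty on the code layer so that each input activates only a few neurons (the size and penalty weight below are illustrative):

```python
# Sparse autoencoder sketch: L1 activity regularization on the code layer.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

inputs = keras.Input(shape=(784,))
code = layers.Dense(
    64, activation="relu",
    activity_regularizer=regularizers.l1(1e-5),   # pushes most activations to zero
)(inputs)
outputs = layers.Dense(784, activation="sigmoid")(code)

sparse_autoencoder = keras.Model(inputs, outputs)
sparse_autoencoder.compile(optimizer="adam", loss="mse")
```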
Here's the first implementation I did for dimensionality reduction a couple of years ago, in minimal code.
🔗https://t.co/AfAdbA6zMi
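The idea in miniature (not the linked code, just a sketch of the same approach): after training, drop the decoder and use the bottleneck outputs as the reduced representation.

```python
# Dimensionality reduction with a trained autoencoder: keep only the encoder half.
from tensorflow import keras

# layers[2] is the 32-unit bottleneck in the dense sketch above; adjust for your model.
encoder = keras.Model(autoencoder.input, autoencoder.layers[2].output)

# codes = encoder.predict(x_test)   # shape (n_samples, 32) instead of (n_samples, 784)
```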
