Author: Alejandro Piad Morffis

This is a Twitter series on #FoundationsOfML.

โ“ Today, I want to start discussing the different types of Machine Learning flavors we can find.

This is a very high-level overview. In later threads, we'll dive deeper into each paradigm... 👇🧵

Last time we talked about how Machine Learning works.

Basically, it's about having some source of experience E for solving a given task T, which allows us to find a program P that is (hopefully) optimal w.r.t. some metric.


According to the nature of that experience, we can define different formulations, or flavors, of the learning process.

A useful distinction is whether we have an explicit goal or desired output, which gives rise to the definitions of 1️⃣ Supervised and 2️⃣ Unsupervised Learning 👇

1๏ธโƒฃ Supervised Learning

In this formulation, the experience E is a collection of input/output pairs, and the task T is defined as a function that produces the right output for any given input.

👉 The underlying assumption is that there is some correlation (or, in general, a computable relation) between the structure of an input and its corresponding output, and that it is possible to infer that function or mapping from a sufficiently large number of examples.
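To make the E/T/P framing concrete, here is a minimal sketch (not from the thread, just an illustration) of supervised learning with a toy 1-nearest-neighbor learner: the experience E is a list of input/output pairs, the "program" P is whatever the learner builds from E, and the task T is producing the right output for a new input.

```python
# A toy supervised learner: 1-nearest-neighbor on 1D inputs.
# E = collection of (input, output) pairs; the learner infers the
# mapping by answering with the output of the closest known input.

def fit(pairs):
    """Build the 'program' P from the experience E.

    For nearest-neighbor, P is simply the memorized examples.
    """
    return list(pairs)

def predict(model, x):
    """The task T: produce an output for an unseen input x."""
    # Find the training pair whose input is closest to x.
    _, nearest_output = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest_output

# Experience E: labeled examples.
E = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
P = fit(E)
print(predict(P, 2.1))  # closest known input is 2 -> "even"
```

Real learners (linear models, neural networks, etc.) differ in how `fit` compresses E into P, but the input/output contract is the same.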