Everything you need to know about the math for machine learning as a beginner.

🧵👇

Before diving into the math, I suggest first having solid programming skills.

For example👇

(2 / 17)
In Python, these are the concepts you must know (a quick snippet illustrating a few of them follows this list):

- Object-oriented programming in Python: classes, objects, methods
- List slicing
- String formatting
- Dictionaries & Tuples
- Basic terminal commands
- Exception handling
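
To make that list concrete, here is a minimal sketch (the class and data are made up for illustration) that touches most of these concepts in a few lines:

```python
# Toy example covering classes/methods, list slicing, f-strings,
# dictionaries & tuples, and exception handling.

class Dataset:
    """A tiny class holding a list of (name, label) samples."""

    def __init__(self, samples):
        self.samples = samples

    def head(self, n=3):
        # List slicing: return the first n samples
        return self.samples[:n]


data = Dataset([("cat", 0), ("dog", 1), ("bird", 2), ("fish", 3)])  # list of tuples

label_map = {name: label for name, label in data.samples}  # dictionary

# String formatting with an f-string
print(f"First two samples: {data.head(2)}")

# Exception handling: look up a key that may not exist
try:
    print(label_map["dragon"])
except KeyError:
    print("No such label in the dataset.")
```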

(3 / 17)
If you want to learn Python, these courses from freeCodeCamp could be of help to you.

🔗Basics: youtube.com/watch?v=rfscVS0vtbw
🔗Intermediate: youtube.com/watch?v=HGOBQPFzWKo

(4 / 17)
You need to have really strong fundamentals in programming, because machine learning involves a lot of it.

It is 100% compulsory.

(5 / 17)
Another question I get asked quite often is: when should you start learning the math for machine learning?

(6 / 17)
Math for machine learning should come after you have worked on some projects. They don't have to be complex at all, just something that gives you a taste of how machine learning works in the real world.

(7 / 17)
Here's how I do it: I look at the math when I have a need for it.

For instance, I was recently competing in a Kaggle challenge.

(8 / 17)
I was brainstorming about which activation function to use in a part of my neural net, so I looked up the math behind each activation function, and that helped me choose the right one.
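
As a rough illustration (not the exact code from the competition), printing the derivatives of a few common activations makes the trade-offs visible. This sketch assumes NumPy:

```python
import numpy as np

# The math behind three common activation functions and their derivatives.
# The derivative tells you how gradients behave during training:
# sigmoid and tanh saturate for large |x|, ReLU does not for x > 0.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # output in (0, 1)

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)                  # at most 0.25, vanishes for large |x|

def relu(x):
    return np.maximum(0.0, x)             # output in [0, inf)

def relu_grad(x):
    return (x > 0).astype(float)          # 1 for positive inputs, 0 otherwise

def tanh_grad(x):
    return 1.0 - np.tanh(x) ** 2          # also saturates for large |x|

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print("sigmoid grad:", sigmoid_grad(x))   # tiny at +/-5 -> vanishing gradients
print("relu grad:   ", relu_grad(x))      # stays 1 for positive inputs
print("tanh grad:   ", tanh_grad(x))
```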

(9 / 17)
The topics of math you'll have to focus on (a small sketch of where they show up in ML follows this list):
- Linear Algebra
- Calculus
- Trigonometry
- Algebra
- Statistics
- Probability
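
To give a rough idea of where these topics appear in practice, here is a minimal sketch (made-up numbers, NumPy, not a reference implementation):

```python
import numpy as np

# Where the math shows up:
# - Linear algebra: a dense layer is a matrix-vector product.
# - Calculus: training is gradient descent on a loss function.
# - Probability/statistics: softmax turns scores into a probability distribution.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))    # weight matrix of a tiny dense layer
x = rng.normal(size=4)         # input vector
y = np.array([1.0, 0.0, 0.0])  # one-hot target

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(100):
    z = W @ x                          # linear algebra: matrix-vector product
    p = softmax(z)                     # probability distribution over classes
    loss = -np.sum(y * np.log(p))      # cross-entropy loss
    grad_W = np.outer(p - y, x)        # calculus: gradient of the loss w.r.t. W
    W -= 0.1 * grad_W                  # gradient descent step

print(f"final loss: {loss:.4f}")       # should be close to 0
```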

Now here are the math resources, with a brief description of each.

(10 / 17)
Neural Networks
> A series of videos that go over how neural networks work with a visual approach; a must-watch.

🔗youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

(11 / 17)
Seeing Theory
> This website gives you an interactive way to learn statistics and probability.

🔗seeing-theory.brown.edu/basic-probability/index.html

(12 / 17)
Gilbert Strang lectures on Linear Algebra (MIT)
> They're 15 years old but still 100% relevant today!
Even though these lectures are for freshman college students, I found them very easy to follow.

🔗youtube.com/playlist?list=PL49CF3715CB9EF31D

(13 / 17)
Essence of Linear Algebra
> A beautifully crafted set of videos that teach you linear algebra through visualisations, in an easy-to-digest manner.

🔗youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

(14 / 17)
Khan Academy
> The resource you must refer to when you forget something or want to revise a topic.

🔗khanacademy.org/math

(15 / 17)
Essence of calculus
> A beautiful series on calculus that makes everything seem super simple.

🔗youtube.com/watch?v=WUvTyaaNkzM&list=PL0-GT3co4r2wlh6UHTUeQsrf3mlS2lk6x

(16 / 17)
The Mathematics for Machine Learning e-book
> This book is aimed at someone who knows a decent amount of high school math: trigonometry, calculus, etc.

I suggest reading this after getting the fundamentals down on Khan Academy.

🔗mml-book.github.io

(17 / 17)
