If you want to remember everything you learn, take 3 minutes to read this:

According to psychology professor Paul Reber, our brains have the ability to store up to the equivalent of three million hours of TV…

But most people don’t ever come close to that capacity.

So how do you maximize your memory?

Follow these steps…
You have 2 forms of memory:

1. Long-term memory

It stores:

• Skills
• Important dates
• Repeated information

2. Working memory

It stores small bits of information that can be used for cognitive tasks.
The more information you add to your long-term memory, the more valuable it becomes.

You can build things upon each other and make valuable connections.

But here’s the thing…
To store information in long-term memory, it has to go through working memory.

That's how your brain filters out info you only need for a short time, like:

• Your plane’s flight number

From important info like:

• Your phone number

But there’s a big problem with our memory…
Overstimulation.

Our brains are overloaded with notifications, ads, and TikToks.

Because of this, our brains have a hard time sorting through what’s important.

So, here are 4 ways you can drastically improve your memory:
Use the Feynman Technique.

This is the best way to learn a new topic:

1. Write out what you want to learn
2. Simplify it for a 5-year-old
3. Review what you didn’t understand
4. Refine until it’s in its simplest form
Turn off notifications.

Each alert tugs at your attention and drains your focus.

Because of this, your mind is flooded with information, which makes it far harder to recall what's important.

Make it a habit to go long periods of time without checking your phone.
Use active recall.

After learning something important, keep reviewing the information.

This signals to your brain: "Hey, this is important."

With enough repetition, you'll store it in your long-term memory.
Use spaced repetition.

This is a learning strategy where you review material at increasing intervals so that important information sticks.
These are the 4 steps you need to take to use spaced repetition:

1. Plan the intervals of your study sessions
2. Review and study the information for the first time
3. Recall the information at the first spacing interval
4. Keep recalling the information at chosen intervals
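The steps above boil down to a simple schedule. Here is a minimal Python sketch of one; the interval lengths (1, 3, 7, 14, 30 days) are illustrative assumptions, not a prescribed spacing:

```python
from datetime import date, timedelta

def review_schedule(first_study: date, intervals=(1, 3, 7, 14, 30)):
    """Return the dates to recall material, at increasing gaps after the
    first study session. The gap lengths are illustrative, not canonical."""
    day = first_study
    dates = []
    for gap in intervals:
        day = day + timedelta(days=gap)  # each review is further out than the last
        dates.append(day)
    return dates

# First study session on Jan 1 -> reviews on Jan 2, Jan 5, Jan 12, Jan 26, Feb 25.
for d in review_schedule(date(2024, 1, 1)):
    print(d.isoformat())
```

Each gap is measured from the previous review, so the spacing grows as the material moves toward long-term memory.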
How to remember everything you learn:

1. Use the Feynman Technique
2. Turn off notifications
3. Use active recall
4. Use spaced repetition
Enjoyed this thread?

Retweet the first tweet to help others.

Want more content like this?

Follow me @mpickle for content on improving your life through productivity. https://t.co/HA9og8uN6G
