Yoga Ramar Temple
#Nedungunam
#TamilNaduTemples 🚩

Yoga Ramar Temple is located near Thiruvannamalai. Shri Rama is seated with eyes closed, without any weapons. Mata Sita points to the feet of Rama, conveying that everyone is safe at his feet. Lakshmana stands near Shri Rama.

Hanuman is seen chanting the Vedas, sitting obediently before Shri Rama.

The Sthala Puranam says that Sukabrahma Maharishi had been doing tapasya here for the darshan of Sri Rama. Prabhu Shri Ram, pleased with his tapasya, came here after the Rama-Ravana yudham. Since Rama was returning after the war, he did not have any weapons with him.

Another gopuram, called Kili Gopuram, stands here for Sukabrahma Maharishi. There is a hill nearby called Dheergachala. According to the Sthala Puranam, the Sri Padham of Prabhu Rama are found on the top of the hill. It is said that this is the place where Shri Rama gave darshan to Suka Maharishi. Nedungunam means the place that shows all the qualities of Shri Rama, who is Maryada Purushottam. Shri Ramanavami is very famous here.

Jai Sri Ram
🙏🕉🙏


How can we use language supervision to learn better visual representations for robotics?

Introducing Voltron: Language-Driven Representation Learning for Robotics!

Paper: https://t.co/gIsRPtSjKz
Models: https://t.co/NOB3cpATYG
Evaluation: https://t.co/aOzQu95J8z

🧵👇(1 / 12)


Videos of humans performing everyday tasks (Something-Something-v2, Ego4D) offer a rich and diverse resource for learning representations for robotic manipulation.

Yet the rich, natural language annotations accompanying each video remain an underused part of these datasets. (2/12)

The Voltron framework offers a simple way to use language supervision to shape representation learning, building on prior work in representations for robotics such as MVP (https://t.co/Pb0mk9hb4i) and R3M (https://t.co/o2Fkc3fP0e).

The secret is *balance* (3/12)

Starting with a masked autoencoder over frames from these video clips, we make a choice:

1) Condition on language and improve our ability to reconstruct the scene.

2) Generate language given the visual representation and improve our ability to describe what's happening. (4/12)

By trading off *conditioning* and *generation*, we show that we can 1) learn better representations than prior methods, and 2) explicitly shape the balance of low-level and high-level features captured.

Why is the ability to shape this balance important? (5/12)
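The conditioning/generation trade-off described in (3/12)–(4/12) can be sketched as a single weighted objective. The toy NumPy snippet below is a minimal illustration under assumed shapes and stand-in "decoder"/"generation head" functions; it is not Voltron's actual architecture, loss, or API, and every name here (`dual_objective`, `alpha`, the toy heads) is a hypothetical placeholder.

```python
# Toy sketch of balancing language-conditioned reconstruction against
# language generation. All components are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def dual_objective(frame_patches, lang_embedding, mask, alpha=0.5):
    """Weighted sum of the two objectives, balanced by alpha.

    frame_patches:  (n_patches, d) visual tokens from a video frame
    lang_embedding: (d,) embedding of the caption
    mask:           (n_patches,) boolean; True = patch hidden from encoder
    alpha:          weight on language-conditioned reconstruction
    """
    # 1) Language-conditioned reconstruction: a stand-in "decoder"
    #    predicts the masked patches from the visible ones plus language.
    visible_mean = frame_patches[~mask].mean(axis=0)
    recon_pred = 0.5 * visible_mean + 0.5 * lang_embedding  # toy decoder
    recon_loss = mse(recon_pred, frame_patches[mask].mean(axis=0))

    # 2) Language generation: a stand-in "head" predicts the caption
    #    embedding from the pooled visual representation alone.
    lang_pred = frame_patches.mean(axis=0)  # toy generation head
    gen_loss = mse(lang_pred, lang_embedding)

    # Shifting alpha trades off low-level appearance detail (reconstruction)
    # against high-level semantics (generation).
    return alpha * recon_loss + (1.0 - alpha) * gen_loss

patches = rng.normal(size=(16, 8))
caption = rng.normal(size=8)
mask = np.arange(16) % 4 != 0  # MAE-style 75% mask ratio (deterministic toy)

loss_recon_heavy = dual_objective(patches, caption, mask, alpha=0.9)
loss_gen_heavy = dual_objective(patches, caption, mask, alpha=0.1)
```

Setting `alpha` near 1 recovers a purely language-conditioned masked autoencoder; setting it near 0 emphasizes describing the scene from the visual representation alone.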


Great article from @AsheSchow. I lived thru the 'Satanic Panic' of the 1980's/early 1990's asking myself "Has everybody lost their GODDAMN MINDS?!"


The 3 big things that made the 1980's/early 1990's surreal for me.

1) Satanic Panic - satanism in the day cares ahhhh!

2) "Repressed memory" syndrome

3) Facilitated Communication [FC]

All 3 led to massive abuse.

"Therapists" - and I use the term loosely to describe these quacks - would hypnotize people and convince them that they were 'reliving' past memories of Mom & Dad killing babies in Satanic rituals in the basement while they were growing up.

Other 'therapists' would badger kids until they invented stories about watching alligators eat babies dropped into a lake from a hot air balloon. Kids would deny anything happened for hours until the therapist 'broke through' and 'found' the 'truth'.

FC was a movement that started with the claim that severely handicapped individuals were able to 'type' legible sentences and communicate if a 'helper' guided their hands over a keyboard.