Buzz Chronicles

Greg Yang
@TheGregYang
1/ An ∞-wide NN of *any architecture* is a Gaussian process (GP) at init. The NN in fact evolves linearly in function space under SGD, so it is a GP at *any time* during training. https://t.co/v1b6kndqCk With Tensor Programs, we can calculate this time-evolving GP without training any NN
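To sketch what "evolves linearly in function space" means (standard NTK notation assumed here, not quoted from the thread): for gradient flow on MSE loss over training data (X, y) with learning rate η, the infinite-width function-space dynamics are

```latex
% Function-space gradient flow under MSE loss; \Theta is the NTK.
% The ODE is affine in f_t, so a Gaussian f_0 stays Gaussian for all t.
\partial_t f_t(x) = -\eta\,\Theta(x, X)\,\bigl(f_t(X) - y\bigr)
```

Because this ODE is affine in f_t, the Gaussian law at initialization is preserved at every time t.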


2/ In this gif, narrow relu networks have a high probability of initializing near the 0 function (because of relu) and getting stuck. This causes the function distribution to become multi-modal over time. However, for wide relu networks this is not an issue.

3/ This time-evolving GP depends on two kernels: the kernel describing the GP at init, and the kernel describing the linear evolution of this GP. The former is the NNGP kernel, and the latter is the Neural Tangent Kernel (NTK).
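For reference, the two kernels can be written as follows (a standard formulation, assumed here rather than taken from the thread); in the infinite-width limit both concentrate to deterministic functions of the architecture:

```latex
% NNGP kernel: covariance of the network outputs at random init.
K(x, x') = \mathbb{E}_{\theta \sim \mathrm{init}}\bigl[f_\theta(x)\, f_\theta(x')\bigr]
% Neural Tangent Kernel: inner product of parameter gradients.
\Theta(x, x') = \mathbb{E}_{\theta \sim \mathrm{init}}
  \bigl[\langle \nabla_\theta f_\theta(x),\, \nabla_\theta f_\theta(x') \rangle\bigr]
```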

4/ Once we have these two kernels, we can derive the GP mean and covariance at any time t via straightforward linear algebra.
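A minimal NumPy sketch of that linear algebra, following the linearized-dynamics formulas of Lee et al. (2019, "Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent"); the kernel blocks (NNGP K and NTK T on train/test points) are assumed precomputed, and all names below are illustrative:

```python
import numpy as np

def gp_at_time_t(K_train, K_test_train, K_test,
                 T_train, T_test_train, y, t, lr=1.0):
    """Mean and covariance of the infinite-width GP after time t of
    gradient flow on MSE loss (linearized dynamics, Lee et al. 2019).
    K_* are NNGP kernel blocks, T_* are NTK blocks; hypothetical names."""
    n = K_train.shape[0]
    # e^{-lr * t * Theta} via eigendecomposition of the symmetric NTK
    evals, evecs = np.linalg.eigh(T_train)
    decay = (evecs * np.exp(-lr * t * evals)) @ evecs.T
    damp = np.eye(n) - decay                    # I - e^{-lr * Theta * t}
    # A = Theta(x*, X) Theta(X, X)^{-1} (I - e^{-lr * Theta * t})
    A = T_test_train @ np.linalg.solve(T_train, damp)
    mean = A @ y                                # GP mean at time t
    cross = A @ K_test_train.T                  # A K(X, x*)
    cov = K_test + A @ K_train @ A.T - (cross + cross.T)
    return mean, cov
```

As t → ∞ the decay factor vanishes and the mean reduces to the kernel-regression predictor Θ(x*, X) Θ(X, X)⁻¹ y.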


5/ So it remains to calculate the NNGP kernel and NT kernel for any given architecture. The first is described in https://t.co/cFWfNC5ALC and in this thread.
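As a concrete special case of that first computation (an illustrative sketch, not the general Tensor Programs construction from the linked paper): for a deep relu MLP the NNGP kernel has a closed-form layer recursion, the arc-cosine kernel of Cho & Saul (2009):

```python
import numpy as np

def relu_nngp_kernel(X1, X2, depth=3):
    """NNGP kernel of a depth-`depth` relu MLP via the arc-cosine
    recursion (Cho & Saul 2009); sigma_w^2 = 2 (He scaling), no bias.
    A hand-rolled sketch for one architecture, not Tensor Programs."""
    d = X1.shape[1]
    K12 = X1 @ X2.T / d                  # cross-kernel at the input layer
    k11 = np.sum(X1 * X1, axis=1) / d    # per-input variances for X1
    k22 = np.sum(X2 * X2, axis=1) / d    # per-input variances for X2
    for _ in range(depth):
        norms = np.sqrt(np.outer(k11, k22))
        cos = np.clip(K12 / norms, -1.0, 1.0)
        theta = np.arccos(cos)
        # 2 * E[relu(u) relu(v)] for centered jointly-Gaussian (u, v):
        K12 = norms * (np.sin(theta) + (np.pi - theta) * cos) / np.pi
        # with sigma_w^2 = 2 the variances k11, k22 are preserved,
        # since 2 * E[relu(u)^2] = E[u^2] for centered Gaussian u.
    return K12
```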