Important paper from Google on large batch optimization. They do impressively careful experiments measuring # iterations needed to achieve target validation error at various batch sizes. The main "surprise" is the lack of surprises. [thread]

The paper is a good example of many elements of good experimental design. They validate their metric by showing that many variants give consistent results. They tune hyperparameters separately for each condition, check that the optimum isn't at the endpoints, and measure sensitivity.
They have separate experiments where they hold fixed # iterations and # epochs, which (as they explain) measure very different things. They avoid confounds, such as batch norm's artificial dependence between batch size and regularization strength.
When the experiments are done carefully enough, the results are remarkably consistent between different datasets and architectures. Qualitatively, MNIST behaves just like ImageNet.
Importantly, they don't find any evidence for a "sharp/flat optima" effect whereby better optimization leads to worse final results. They have a good discussion of experimental artifacts/confounds in past papers where such effects were reported.
Time to target validation error is explained purely by optimization considerations. There's a regime where gradient variance dominates, and you get linear speedups w/ batch size. Then there's a regime where curvature dominates and larger batches don't help. As theory would predict.
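Both regimes fall out of even the simplest noisy quadratic model. Here is a minimal sketch (my own toy illustration, not from the paper; `sigma`, `x0`, the target, and the learning-rate grid are made-up values) that tunes the learning rate separately per batch size, as the paper does, and reports expected steps to a target loss:

```python
import numpy as np

def steps_to_target(batch_size, lr, sigma=3.0, x0=10.0, target=1e-2, max_steps=10**6):
    """Expected loss of SGD on f(x) = x^2 / 2 with per-example gradient noise:
    x <- x - lr * (x + n),  n ~ N(0, sigma^2 / batch_size).
    Tracks the mean and variance of x in closed form, so there is no sampling noise."""
    a = 1.0 - lr                                  # per-step contraction factor
    if abs(a) >= 1.0:
        return None                               # divergent step size
    noise_var = (lr * sigma) ** 2 / batch_size
    if 0.5 * noise_var / (1.0 - a * a) >= target:
        return None                               # stationary noise floor sits above the target
    mean, var = x0, 0.0
    for t in range(1, max_steps + 1):
        mean *= a
        var = a * a * var + noise_var
        if 0.5 * (mean * mean + var) < target:    # expected loss E[x^2] / 2
            return t
    return None

def best_steps(batch_size, lrs=np.geomspace(1e-4, 1.9, 60)):
    """Tune the learning rate separately per batch size, then take the best step count."""
    return min(s for lr in lrs if (s := steps_to_target(batch_size, lr)) is not None)

for b in (1, 4, 16, 64, 256, 1024):
    print(b, best_steps(b))   # steps fall roughly linearly with batch size, then flatten
```

Small batches force a small learning rate (to keep the noise floor below the target), so steps scale ~1/B; once the batch is large enough, the step count is set by how fast the quadratic itself contracts, and extra batch size buys nothing.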
Incidentally, this paper must have been absurdly expensive, even by Google's standards. Doing careful empirical work on optimizers requires many, many runs of the algorithm. (I think surprising phenomena on ImageNet are often due to the difficulty of running proper experiments.)
