PHP 7.3 is here!
Here are the things I'm excited about.
Trailing commas in function calls will certainly be a required styling choice for me. Same delightful benefits as trailing commas in multi-line PHP arrays and JavaScript objects.
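A minimal sketch of how it looks (the variables here are made up):

```php
<?php
// PHP 7.3 allows a trailing comma after the last argument of a function call,
// just like multi-line arrays have allowed for years.
$defaults     = ['theme' => 'light', 'lang' => 'en'];
$userSettings = ['theme' => 'dark'];

$settings = array_merge(
    $defaults,
    $userSettings, // trailing comma in a call: legal as of 7.3
);

// The array equivalent we already know and love:
$colors = [
    'red',
    'green',
    'blue',
];
```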
Inlining heredoc strings has always been grrrosssss. Now we get sensible capabilities: the closing marker can finally be indented (with that indentation stripped from the string) and can be followed by other code on the same line. Everything that was wrong with it is fixed!
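A small sketch of how the flexible syntax reads now (names and strings are made up):

```php
<?php
// Before 7.3 the closing identifier had to start at column 0, which made
// heredocs inside functions and arrays painful to read.
function greeting(string $name): string
{
    return <<<EOT
        Hello, {$name}!
        Welcome aboard.
        EOT; // 7.3: the closing marker can be indented, and that indentation
             // is stripped from every line of the string.
}

echo greeting('Ada');
// Hello, Ada!
// Welcome aboard.
```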
This really sucked before; now it just sucks a bit less (who wants to pass a 4th param and spell out 2 default params first? Helper function, anybody?).
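Assuming this is about the new JSON_THROW_ON_ERROR flag for json_decode(), the before-and-after looks roughly like this:

```php
<?php
$raw = '{"oops": '; // deliberately malformed JSON

// Before 7.3: json_decode() quietly returns null, so you check json_last_error() yourself.
$data = json_decode($raw, true);
if (json_last_error() !== JSON_ERROR_NONE) {
    echo 'Bad payload: ' . json_last_error_msg() . PHP_EOL;
}

// 7.3: pass JSON_THROW_ON_ERROR as the 4th argument and catch a JsonException instead.
// (Yes, you still have to spell out the two default arguments to get there.)
try {
    $data = json_decode($raw, true, 512, JSON_THROW_ON_ERROR);
} catch (JsonException $e) {
    echo 'Bad payload: ' . $e->getMessage() . PHP_EOL;
}
```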
Before, you either strung a bunch of functions together or messed with internal array pointers. This is a much-needed improvement.
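Presumably this refers to the new array_key_first() and array_key_last() functions; a quick before-and-after sketch:

```php
<?php
$scores = ['alice' => 90, 'bob' => 72, 'carol' => 88];

// Before 7.3: move the internal pointer around (reset()/end() + key()),
// or string functions together like array_keys($scores)[0].
reset($scores);
$first = key($scores); // 'alice'
end($scores);
$last = key($scores);  // 'carol'

// 7.3: ask for the key directly, without touching the pointer.
$first = array_key_first($scores); // 'alice'
$last  = array_key_last($scores);  // 'carol'
```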
More from Tech
THREAD: How is it possible to train a well-performing, advanced Computer Vision model on the CPU?
At the heart of this lies the most important technique in modern deep learning - transfer learning.
Let's analyze how it works.
2/ For starters, let's look at what a neural network (NN for short) does.
An NN is like a stack of pancakes, with computation flowing up when we make predictions.
How does it all work?
3/ We show an image to our model.
An image is a collection of pixels. Each pixel is just a bunch of numbers describing its color.
Here is what it might look like for a black and white image:
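(The picture from the thread isn't reproduced here; as a rough stand-in, a black-and-white image is nothing more than a grid of brightness values, 0 for black up to 255 for white. A made-up 4x4 example:)

```php
<?php
// Hypothetical 4x4 black-and-white "image": just numbers describing brightness.
$image = [
    [  0,  34,  67, 255],
    [ 12,  90, 180, 240],
    [  7, 120, 200, 230],
    [  0,  45,  99, 210],
];
```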
4/ The picture goes into the layer at the bottom.
Each layer performs computation on the image, transforming it and passing it upwards.
5/ By the time the image reaches the uppermost layer, it has been transformed to the point that it now consists of two numbers only.
The outputs of a layer are called activations, and the outputs of the last layer have a special meaning... they are the predictions!
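(Not part of the thread, but to make "activations" and "predictions" concrete, here is a toy last layer sketched in plain PHP. Every number, weight, and name in it is made up purely for illustration.)

```php
<?php
// Toy last layer: turn the activations arriving from the layers below
// into two prediction scores (say, dog vs. cat). All numbers are made up.

/** One dense layer: each output is a weighted sum of the inputs plus a bias. */
function dense(array $weights, array $bias, array $input): array
{
    $out = [];
    foreach ($weights as $i => $row) {
        $sum = $bias[$i];
        foreach ($row as $j => $w) {
            $sum += $w * $input[$j];
        }
        $out[$i] = $sum;
    }
    return $out;
}

/** Softmax squashes raw activations into probabilities that add up to 1. */
function softmax(array $activations): array
{
    $exp = array_map('exp', $activations);
    $sum = array_sum($exp);
    return array_map(function ($v) use ($sum) {
        return $v / $sum;
    }, $exp);
}

$fromLowerLayers = [0.9, -1.2, 0.4];              // activations flowing up the stack
$weights = [[0.5, -0.3, 0.8], [-0.6, 0.9, 0.1]];  // 2 outputs x 3 inputs
$bias    = [0.1, -0.1];

$predictions = softmax(dense($weights, $bias, $fromLowerLayers));
print_r($predictions); // two numbers: e.g. probability of "dog" and of "cat"
```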
THREAD: Can you start learning cutting-edge deep learning without specialized hardware? 🤖
— Radek Osmulski (@radekosmulski) February 11, 2021
In this thread, we will train an advanced Computer Vision model on a challenging dataset. 🐕🐈 Training completes in 25 minutes on my 3yrs old Ryzen 5 CPU.
Let me show you how...