@spyrosmakrid and his team listened to the community's feedback and, within a window of three years, offered two fantastic, insightful and impactful forecasting competitions, the M4 and the M5. I am tired of all the "sceptics" who try to dismiss the competitions' results.
1/8
First, the ongoing discussion on error measures. It seems that everyone in the forecasting community has devised their own measure, and if someone uses a different one then they are doomed. The real question is: does the chosen error measure fit the purpose of the exercise? (A small sketch below shows how two standard measures can disagree.)
2/8
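To make that concrete, here is a minimal sketch of my own (the data and forecasts are made up, and this is not taken from any competition's evaluation code): two widely used measures, sMAPE and MASE, can rank the same pair of forecasts differently, because sMAPE penalises under-forecasts more heavily than over-forecasts of the same size while MASE treats them symmetrically.

```python
import numpy as np

def smape(actual, forecast):
    # Symmetric MAPE (in percent), as used in M3/M4.
    return 100 * np.mean(2 * np.abs(forecast - actual) / (np.abs(actual) + np.abs(forecast)))

def mase(actual, forecast, insample, m=1):
    # Mean Absolute Scaled Error: out-of-sample MAE scaled by the
    # in-sample naive (lag-m) forecast MAE.
    scale = np.mean(np.abs(insample[m:] - insample[:-m]))
    return np.mean(np.abs(actual - forecast)) / scale

# Hypothetical history and hold-out, plus two forecasts that miss by
# exactly one unit everywhere: A under-forecasts, B over-forecasts.
insample = np.array([10., 12., 11., 13., 12., 14.])
actual   = np.array([13., 15., 14.])
f_a      = np.array([12., 14., 13.])
f_b      = np.array([14., 16., 15.])

for name, f in [("A", f_a), ("B", f_b)]:
    print(name, "sMAPE:", round(smape(actual, f), 2), "MASE:", round(mase(actual, f, insample), 3))
# MASE ties A and B (identical absolute errors); sMAPE prefers B.
```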
Second, the disbelief in the results, especially from people who did not participate in the competitions. I have said this before, but skin in the game is important in forecasting (if you want to call yourself a forecaster).
3/8
The fact that a method won a competition does not mean that it will win all future competitions. But there are important lessons to be learnt. In the case of M3 and Theta: the value of information decomposition. In the case of M4 & M5: the value of cross-learning (sketched below).
4/8
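For what I mean by cross-learning, a toy sketch of my own (synthetic data, a deliberately simple lag-1 autoregression, not the M4/M5 winners' code): instead of estimating one model per series, a single model is estimated on lagged observations pooled across all series, so every series contributes to the parameters used to forecast every other series.

```python
import numpy as np

rng = np.random.default_rng(0)
# Twenty short synthetic series (random walks with different levels).
series = [np.cumsum(rng.normal(size=50)) + rng.normal(10, 2) for _ in range(20)]

def lag_pairs(y):
    # Pairs (y_t, y_{t+1}) from a single series.
    return y[:-1], y[1:]

# "Local" approach: one AR(1) slope per series, each fit on its own data.
local_slopes = []
for y in series:
    x, t = lag_pairs(y)
    local_slopes.append(np.sum(x * t) / np.sum(x * x))

# "Cross-learned" (global) approach: one slope fit on all series pooled together.
X = np.concatenate([lag_pairs(y)[0] for y in series])
T = np.concatenate([lag_pairs(y)[1] for y in series])
global_slope = np.sum(X * T) / np.sum(X * X)

print("first few per-series slopes:", np.round(local_slopes[:5], 3))
print("single pooled slope:", round(global_slope, 3))
```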
Such insights should be taken into account when designing future methods and algorithms. Since 2000, I have seen many new implementations of the Theta method that have improved on the original proposition. The same will surely be the case with the top methods of M4 & M5.
5/8