The selection of areas and methods is heavily influenced by my own interests; the selected topics are biased towards representation and transfer learning and towards natural language processing (NLP). I tried to cover the papers that I was aware of but likely missed many relevant ones—feel free to highlight them in the comments below. In all, I discuss the following highlights:

- Scaling up—and down
![ML and NLP Research Highlights of 2020](https://www.ruder.io/content/images/2021/01/lra_analysis-2.png)