Three key points
✔️ A major contribution to the development of natural language processing
✔️ Improves accuracy simply by being connected in front of a model, even without additional training
✔️ Novelty in its input/output design

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
written by Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
(Submitted on 11 Oct 2018 (v1), last revised 24 May 2019 (this version, v2))
Comments: Published by NAACL-HLT 2019
Subjects: Computation and Language (cs.CL)

Introduction

In February 2019, the top conference in natural language processing
[Header image: "What is BERT, the latest natural language processing technology released by Google?"]
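The second key point refers to the feature-based way BERT can be used: a pretrained BERT is connected in front of a downstream model as a frozen feature extractor, so accuracy improves without retraining BERT itself. Below is a minimal sketch of that idea, assuming the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint (neither appears in the original article):

```python
# A minimal sketch of "connect a frozen BERT in front": the pretrained
# model produces contextual features that a downstream classifier can
# consume without any fine-tuning of BERT itself.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # no training: BERT serves purely as a feature extractor

sentence = "BERT produces deeply bidirectional representations."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():  # frozen weights, forward pass only
    outputs = model(**inputs)

# The [CLS] token vector is a 768-dimensional sentence-level feature
# that a lightweight task head (e.g. logistic regression) can take as
# input instead of learning representations from scratch.
cls_features = outputs.last_hidden_state[:, 0, :]
print(cls_features.shape)  # torch.Size([1, 768])
```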