
Bookmarks for August 25, 2021 (1 entry)

  • Program Synthesis with Large Language Models

    This paper explores the limits of the current generation of large language models for program synthesis in general purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. Our benchmarks are designed to measure the ability of these models to synthesize short Python programs from natural language descriptions.

    Ryobot 2021/08/25
    “We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes.” At a sufficiently large model size there is essentially no gap between few-shot (FS) and fine-tuning (FT).
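
    To make the few-shot regime concrete: a handful of solved tasks (description, test asserts, reference solution) are prepended to the prompt, the model completes the new task, and a sample counts as correct only if it passes the task's tests. The sketch below illustrates that loop under assumptions of mine; the exemplar, prompt layout, and the build_prompt / passes_tests helpers are hypothetical, not the paper's actual format.

    # Minimal few-shot program-synthesis sketch (illustrative, not the paper's
    # exact prompt). A solved exemplar is prepended to the new task, and the
    # model's completion is judged by functional correctness: the task's
    # assert statements must all pass when the sampled code is executed.

    # Hypothetical exemplar: task description, test asserts, reference solution.
    FEW_SHOT_EXAMPLES = [
        {
            "task": "Write a function to return the maximum of two numbers.",
            "tests": ["assert max_of_two(3, 7) == 7"],
            "solution": "def max_of_two(a, b):\n    return a if a > b else b",
        },
    ]

    def build_prompt(task, tests):
        """Concatenate solved exemplars, then the new task with its tests."""
        parts = []
        for ex in FEW_SHOT_EXAMPLES:
            parts += [ex["task"], *ex["tests"], ex["solution"], ""]
        parts += [task, *tests]
        return "\n".join(parts)

    def passes_tests(candidate_code, tests):
        """Functional-correctness check: run the sampled code, then the asserts."""
        env = {}
        try:
            exec(candidate_code, env)   # define the candidate function
            for t in tests:
                exec(t, env)            # every assert must hold
            return True
        except Exception:
            return False

    The fine-tuning regime differs only in that the model is first trained on such task/solution pairs before being prompted; that is the comparison the comment above refers to when noting that FS and FT converge at the largest model sizes.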