lilianweng.github.io
Date: June 23, 2023 | Estimated Reading Time: 31 min | Author: Lilian Weng
Building agents with an LLM (large language model) as the core controller is a cool concept. Several proof-of-concept demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potential of LLMs extends beyond generating well-written copy, stories, essays and programs; it can be framed as a powerful…
Date: March 15, 2023 | Estimated Reading Time: 21 min | Author: Lilian Weng
Prompt Engineering, also known as In-Context Prompting, refers to methods for communicating with an LLM to steer its behavior toward desired outcomes without updating the model weights. It is an empirical science, and the effect of prompt engineering methods can vary a lot among models, thus requiring heavy experimentation…
Date: January 27, 2023 | Estimated Reading Time: 46 min | Author: Lilian Weng
Many new Transformer architecture improvements have been proposed since my last post on “The Transformer Family” about three years ago. Here I did a big refactoring and enrichment of that 2020 post: restructured the hierarchy of sections and improved many sections with more recent papers. Version 2.0 is a superset of the…
Date: January 10, 2023 | Estimated Reading Time: 9 min | Author: Lilian Weng
[Updated on 2023-01-24: added a small section on Distillation.] Large transformer models are mainstream nowadays, creating SoTA results for a variety of tasks. They are powerful but very expensive to train and use. The extremely high inference cost, in both time and memory, is a big bottleneck for adopting a powerful transformer…
Date: May 31, 2021 | Estimated Reading Time: 39 min | Author: Lilian Weng
The goal of contrastive representation learning is to learn an embedding space in which similar sample pairs stay close to each other while dissimilar ones are far apart. Contrastive learning can be applied in both supervised and unsupervised settings. When working with unsupervised data, contrastive learning is one of…
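The excerpt above states the contrastive objective in words: pull similar pairs together, push dissimilar ones apart. A minimal sketch of one common instantiation, an InfoNCE-style loss written in plain NumPy (an illustration under assumed batch-as-negatives pairing, not code from the post):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Toy InfoNCE: the positive for anchor i is row i of `positives`;
    every other row in the batch serves as a negative."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy where the "correct" class for row i is column i
    return -np.mean(np.diag(log_probs))
```

Correctly matched pairs yield a lower loss than mismatched ones, which is exactly the "similar close, dissimilar far" behavior the excerpt describes.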
Date: July 11, 2021 | Estimated Reading Time: 32 min | Author: Lilian Weng
[Updated on 2021-09-19: Highly recommend this blog post on score-based generative modeling by Yang Song (author of several key papers in the references).] [Updated on 2022-08-27: Added classifier-free guidance, GLIDE, unCLIP and Imagen.] [Updated on 2022-08-31: Added latent diffusion model.] [Updated on 2024-04-13: Added prog…
 ← Curriculum">