I want to see GPT's model structure with my own eyes! You know that feeling, right? While running the Giken flea market, I somehow managed to get it rendered. Take a look.

Notes on how I did it:

```python
from transformers import pipeline
from torchviz import make_dot

# Build a GPT-2 text-generation pipeline (tokenizer and model come bundled)
generator = pipeline('text-generation', model='gpt2')
m = generator.model

# Generate a short token sequence (starts from the BOS token when no prompt is given),
# then run a forward pass to get the logits we want to trace
x = m.generate()
y = m(x)

# Trace the computation graph from the logits back through the model's parameters
image = make_dot(y.logits, params=dict(m.named_parameters()))
```
![Visualization of GPT's model structure|shi3z](https://cdn-ak-scissors.b.st-hatena.com/image/square/7ebda457054d05397a5772bbc4ffd66c2bb5c4ec/height=288;version=1;width=512/https%3A%2F%2Fassets.st-note.com%2Fproduction%2Fuploads%2Fimages%2F107862984%2Frectangle_large_type_2_98b1aa2b18aa2006416a4acb78e11935.png%3Ffit%3Dbounds%26quality%3D85%26width%3D1280)