Fine-tuning llama with LoRA on the Alpaca dataset turned out well, so I want to try fine-tuning Bloom in Japanese. I follow the article below as-is. Note that I have only gotten the fine-tune to *run*; I have not trained it properly yet. I also refer to Huggingface's Bloom and peft docs.

fine tune

Change the fine-tune target to Bloom. The original (llama) loading code:

```python
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    load_in_8bit=True,
    device_map=device_map,
)
tokenizer = LlamaTokenizer.from_pretrained(
    "decapoda-research/llama-7b-hf", add_eos_token=True
)
```
Reference: BloomをLoRaを使い日本語alpaca datasetでfine tuneを動かす - Qiita
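As a minimal sketch of what "change the fine-tune target to Bloom" could look like: the llama-specific classes are swapped for the `Auto*` classes, and the LoRA target modules change because Bloom fuses Q/K/V into a single `query_key_value` linear layer (llama exposes separate `q_proj`/`v_proj`). The checkpoint name `bigscience/bloom-7b1` and the LoRA hyperparameters here are my assumptions, not values confirmed by the article.

```python
# LoRA target modules differ per architecture; Bloom uses one fused
# attention projection, so LoRA attaches to "query_key_value".
LORA_TARGET_MODULES = {
    "llama": ["q_proj", "v_proj"],
    "bloom": ["query_key_value"],
}


def load_bloom_for_lora(model_name="bigscience/bloom-7b1"):
    """Load a Bloom checkpoint in 8-bit and wrap it with LoRA adapters.

    model_name is an assumed checkpoint; r/lora_alpha/lora_dropout below
    are common alpaca-lora-style defaults, not tuned values.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        load_in_8bit=True,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=LORA_TARGET_MODULES["bloom"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    return model, tokenizer
```

With this, the rest of the alpaca-lora training loop should work unchanged, since `get_peft_model` returns a regular `nn.Module`.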