In recent years, artificial intelligence based on deep neural networks (DNNs) has made remarkable progress. In natural language processing (NLP) in particular, a variety of DNN-based language models have emerged that use Transformer architectures pre-trained on large-scale text data. These pre-trained large language models (LLMs) have achieved high accuracy across a wide range of NLP tasks, leadin