Announcement: Apprhythm Employee Presentation at the 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2025)
The 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2025) will be held in Osaka from Tuesday, May 27 to Friday, May 30, 2025.
We are pleased to announce that Apprhythm will present its research results on Thursday, May 29 at 14:00.
https://confit.atlas.jp/guide/event/jsai2025/subject/3K4-IS-2a-02/advanced?cryptoId=
(Source of the abstract quoted below)
[3K4-IS-2a-02] Cross-Lingual Finetuning in Large Language Models
〇Jude McCutcheon1 (1. Apprhythm Co., Ltd.)
Keywords: LLM, Finetuning, Cross-lingual
Large Language Models (LLMs) have set new benchmarks in various fields, achieving higher task performance with smaller training datasets.
This success is largely attributed to the pretrain-finetune paradigm: models are first pretrained on extensive unlabeled corpora to
develop general language understanding and then fine-tuned on smaller labeled datasets for specific tasks.
While effective for many languages and tasks, this approach remains challenging for lower-resource languages, where labeled task data is scarce.
Even Japanese, a higher-resource language, is held back by the relative scarcity of task-specific datasets. However, leveraging the wealth of
English-language resources through cross-lingual training offers a promising solution.
This study investigates the cross-lingual generalization capabilities of LLMs by fine-tuning a monolingual English model and
its continually pretrained Japanese counterpart on English task datasets and evaluating them on comparable Japanese tasks.
Our findings reveal that much of the task-specific knowledge imparted during fine-tuning transcends language boundaries,
positioning cross-lingual fine-tuning as a powerful strategy for enhancing LLM performance in lower-resource languages.
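As a rough illustration of the setup described in the abstract, the sketch below fine-tunes a causal language model on an English task dataset and then evaluates it on a comparable Japanese dataset. It is a minimal sketch under assumed conditions only: the base model name ("gpt2"), the file names english_task_train.jsonl and japanese_task_eval.jsonl, and all hyperparameters are placeholders, not the models, data, or settings used in the presented work.

# Minimal sketch of cross-lingual fine-tuning (assumed setup, not the study's actual configuration):
# fine-tune a causal LM on English task data, then evaluate on comparable Japanese task data.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder; the study compares an English model and its continually pretrained Japanese counterpart
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs like GPT-2 define no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical JSONL files with a "text" field: English examples for training,
# Japanese examples of the comparable task for evaluation.
data = load_dataset(
    "json",
    data_files={"train": "english_task_train.jsonl", "eval": "japanese_task_eval.jsonl"},
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data.map(tokenize, batched=True, remove_columns=data["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="xlingual-finetune",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        logging_steps=50,
    ),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["eval"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()            # fine-tune on the English task data only
print(trainer.evaluate())  # measure loss on the comparable Japanese task data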
The 39th Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2025)
Dates: Tuesday, May 27 to Friday, May 30, 2025
Venue: Osaka International Convention Center (Grand Cube Osaka) + online