Yuki Arase

Associate Professor (Japanese page)

Big Data Engineering Laboratory,
Graduate School of Information Science and Technology,
Osaka University, Japan

Email: arase[at]ist.osaka-u.ac.jp
Twitter: @Yuki_arase

Yuki Arase is an associate professor at the Graduate School of Information Science and Technology, Osaka University, Japan. She was previously an associate researcher in the Natural Language Computing group at Microsoft Research Asia. Her primary research interests are paraphrasing, conversation systems, and educational applications for language learners. She earned her Ph.D. in Information Science at Osaka University in 2010 for research on presenting large amounts of information on small screens.

Apart from research, Yuki enjoys practicing yoga and always travels with a yoga mat in her luggage.

I’m recruiting PhD students starting from Oct. 2022 or Apr. 2023. Please send me your CV with a publication list if you are interested. Note that we do not have “research student” positions.

Publications / CV / ACL Anthology / Google Scholar / Semantic Scholar


(Last update: 2023/05/31)


For more details, please see the Research page.

Paraphrase generation & recognition

Paraphrasing covers various forms of monolingual text transformation, such as text simplification, rewriting, and style transfer. We work on both recognition and generation. The core technologies are intelligent phrase alignment and controllable paraphrase generation.

Related papers: DIRECT, phrase alignment, Round-trip translation for paraphrasing, SAPPHIRE

Representation learning

Vector representations of words, phrases, and sentences are the foundation of NLP research. We study

  1. sophisticated representations for word meaning in context and multilingual sentences,
  2. efficient pre-trained models for words and phrases, and
  3. representations for few-shot learning.

Related papers: WiC representation, disentangling sentence meaning, transfer fine-tuning, label representation for few-shot learning, tiny word embedding

NLP for language education & learning

As a central application of our research outcomes, we develop technologies that support language learning and education. They range from fine-grained lexical-level transformations to coarse-grained text-level processing.

Related papers: definition generation, controllable text simplification, CEFR-based lexical simplification, fill-in-the-blank quiz generation

Selected Recent Publications

Academic Service