Kyuyoung Kim

Ph.D. student, KAIST AI

kykim [AT] cs.stanford.edu

Bio

I am a third-year Ph.D. student in the Algorithmic Intelligence Lab at KAIST AI. My research focuses on developing efficient and safe methods for enhancing, aligning, and personalizing generative models, with an emphasis on large language models. My broader research interests include generative models, reinforcement learning from human feedback, AI safety, and nature-inspired intelligence. I always strive to understand the fundamental (mathematical) principles behind everything I work on more deeply, though that is often quite challenging :-)

Previously, I worked as a senior software engineer at Google, developing machine learning methods to localize Google Assistant for low-resource languages. I also worked in Display Ads, building the ads backend system and enhancing auction algorithms for an improved user experience. I received an MS in computer science from Stanford University and a BS in computer science with a minor in applied mathematics from Cornell University.

Publications

Most recent publications on Google Scholar.
* indicates equal contribution.

Self-Refining Language Model Anonymizers via Adversarial Distillation

K. Kim, H. Jeon, J. Shin

arXiv preprint arXiv:2506.01420

Personalized Language Models via Privacy-Preserving Evolutionary Model Merging

K. Kim, J. Shin, J. Kim

EMNLP 2025

Mamba Drafters for Speculative Decoding

D. Choi, S. Oh, S. Dingliwal, J. Tack, K. Kim, W. Song, S. Kim, I. Han, J. Shin, A. Galstyan, S. Katiyar, S. B. Bodapati

EMNLP 2025 Findings
ICML 2025 Workshop on Efficient Systems for Foundation Models (ES-FoMo III)

Learning to Contextualize Web Pages for Enhanced Decision Making by LLM Agents

D. Lee, J. Lee, K. Kim, J. Tack, J. Shin, Y. W. Teh, K. Lee

ICLR 2025

Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning

J. Nam, K. Kim, S. Oh, J. Tack, J. Kim, J. Shin

NeurIPS 2024

Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback

K. Kim, A. Seo, H. Liu, J. Shin, K. Lee

EMNLP 2024 Findings

Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models

K. Kim, J. Jeong, M. An, M. Ghavamzadeh, K. Dvijotham, J. Shin, K. Lee

ICLR 2024

Adaptive Algorithms for Efficient Risk Estimation of Black-Box Systems

K. Kim

MS thesis, Stanford University

A Deep Reinforcement Learning Approach to Rare Event Estimation

A. Corso, K. Kim, S. Gupta, G. Gao, M. Kochenderfer

arXiv preprint arXiv:2211.12470

BEHAVIOR-1K: A Benchmark for Embodied AI with 1,000 Everyday Activities and Realistic Simulation

C. Li, C. Gokmen, G. Levine, R. Martín-Martín, S. Srivastava, C. Wang, J. Wong, R. Zhang, M. Lingelbach, J. Sun, M. Anvari, M. Hwang, M. Sharma, A. Aydin, D. Bansal, S. Hunter, K. Kim, A. Lou, C. Matthews, I. Villa-Renteria, J. Tang, C. Tang, F. Xia, S. Savarese, H. Gweon, K. Liu, J. Wu, F.-F. Li

CoRL 2022 (oral)

Using Machine Translation to Localize Task Oriented NLG Output

S. Roy, C. Brunk, K. Kim, J. Zhao, M. Freitag, M. Kale, G. Bansal, S. Mudgal, C. Varano

arXiv preprint arXiv:2107.04512

Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset

B. Byrne, K. Krishnamoorthi, C. Sankar, A. Neelakantan, D. Duckworth, S. Yavuz, B. Goodrich, A. Dubey, A. Cedilnik, K. Kim

EMNLP-IJCNLP 2019

Finding Overlapping Communities From Subspaces

D. Bindel, P. Chew, J. Hopcroft, K. Kim, C. Ponce

Technical Report 2011

Vitæ

Full Resume in PDF.

Website Design

This website is based on this wonderful GitHub repo.